"No one is harder on a talented person than the person themselves" - Linda Wilkinson ; "Trust your guts and don't follow the herd" ; "Validate direction not destination" ;

August 15, 2016

Day #28 - R - Forecast Library Examples

The following examples were discussed, using the R forecast library (a short code sketch follows the list):
  • Moving Average
  • Single Exponential Smoothing - Uses single smoothing factor
  • Double Exponential Smoothing - Uses two constants and is better at handling trends
  • Triple Exponential Smoothing - Smoothing factor, trend, seasonal factors considered
  • ARIMA
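
A minimal sketch of these techniques with the forecast package, assuming a monthly series such as the built-in AirPassengers data; window sizes, horizons, and the seasonal choice are illustrative.

# Smoothing and ARIMA with the forecast package (illustrative parameters)
library(forecast)

y <- AirPassengers

ma_fit    <- ma(y, order = 12)                    # moving average over a 12-month window
ses_fit   <- ses(y, h = 12)                       # single exponential smoothing
holt_fit  <- holt(y, h = 12)                      # double exponential smoothing (level + trend)
hw_fit    <- hw(y, seasonal = "multiplicative",   # triple exponential smoothing
                h = 12)                           # (level + trend + seasonality)
arima_fit <- forecast(auto.arima(y), h = 12)      # ARIMA with automatic order selection

plot(arima_fit)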

Happy Learning!!!

August 08, 2016

Applied Machine Learning Notes


Supervised Learning
  • Classification (Discrete Labels)
  • Regression (Output is continuous, Example - Age, Stock prices)
  • Past data + Past Outputs used
Unsupervised Learning
  • Dimensionality reduction (data in higher dimensions; remove dimensions without losing a lot of information)
  • Reducing dimensionality makes computation easier (continuous values)
  • Clustering (Discrete labels)
  • No Past outputs, Only current data
Reinforcement Learning
  • Game playing is not supervised; it is learned through rewards
  • Learning Policy
  • Negative / Positive reward for each step
Type of Models
  • Inductive (learn a general model / function from data) vs Transductive (lazy learning, e.g., taking opinions from like-minded people)
  • Online (learn from every new incoming tweet) vs Offline (train on, e.g., the past 1 year of tweets)
  • Generative (e.g., fit a Gaussian to the data and estimate mean / variance by maximum likelihood) vs Discriminative (directly learn the two sides of a separating line)
  • Parametric vs Non-Parametric Models
Happy Learning!!!

July 31, 2016

Fifth Elephant Day #2

Fifth Elephant Day #2 - Part I

Session #1 - Content Marketing
  • Distribute relevant consistent content. Traditional vs Content Marketing
Challenges
  • Delivering content with speed. Channel proliferation (mobile, computers, tablets)
  • Intersection of Brands, Trends, Community Interests (Social media post and metrics)
  • Data from social media pages, online aggregators



Technical Details
  • Computation of term frequency, inverse document frequency
  • Using Solr, Lucene for Indexes
  • Cosine Similarity
  • Greedy Algorithm
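
A rough sketch of the TF-IDF and cosine similarity computations mentioned above, in base R on toy documents; the document contents are made up, and Solr / Lucene would handle indexing at scale.

# Toy TF-IDF + cosine similarity (illustrative documents)
docs <- list(c("brand", "content", "marketing", "content"),
             c("brand", "community", "social", "media"),
             c("content", "social", "media", "metrics"))

vocab <- unique(unlist(docs))
tf    <- t(sapply(docs, function(d) table(factor(d, levels = vocab)) / length(d)))  # term frequency
idf   <- log(length(docs) / colSums(tf > 0))                                        # inverse document frequency
tfidf <- sweep(tf, 2, idf, `*`)

cosine <- function(a, b) sum(a * b) / (sqrt(sum(a^2)) * sqrt(sum(b^2)))
cosine(tfidf[1, ], tfidf[3, ])   # similarity between document 1 and document 3
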
Session #2 - Reasoning
  • Prediction vs Reasoning problem
  • Prediction Problems Evolution 
  • At an advanced level: Deep Learning, XGBoost, graphical models
When to apply prediction?
Features as input -> prediction performed (independent, stateless)

Reasoning - Sequential, Stateful Exploration
Reasoning Problems - Diagnosis, routes, games, crossing roads

Flavours of Reasoning
  • Algorithmic (Search)
  • Logical reasoning
  • Bayesian probabilistic reasoning
  • Markovian reasoning
Knowledge, learning the process of reasoning; knowledge graphs were shown in the implementation of reasoning, represented as triples:
{subject, predicate, object}

Session #3 - Continuous online learning
  • 70% noise in C2B communication
  • 100% noise in B2C communication
  • Zipfian
Technicalities
  • Apriori - Market Basket Analysis
  • XGBoost - Alternative to DL
  • Bias - Variance Tradeoff
  • Spectral Clustering

Birds of a Feather Session
  • Google DeepMind (used for air conditioning / data-centre cooling)
  • Bayesian Probabilistic Learning
  • Deep Learning - Build Hierarchy of features (OCR type of problems)
  • Traditional neural network (fully connected, many degrees of freedom)
  • Structural causality (Subsystem appears before, Domain knowledge)
  • Temporal causality - This and then that happened
  • CNN - learning weights
  • Spectral clustering
  • PCA (reduce dense, high-dimensional data to fewer dimensions)
  • Deep Learning - Hidden layers obtained through coarse grained process
Deep Learning workshop Notes
  • Neural Networks
  • Multiple Layers
  • Lots of data
People involved - Hinton, Andrew Ng, Bengio, LeCun

Deep Learning now
  • Speech recognition
  • Google Deep Models on Phone
  • Google street view (House numbers)
  • Imagenet
  • Captioning images
  • Reinforcement learning
Neural Networks
  • Simple mathematical units combine into complex functions
  • x -> input, W -> weights; the output is a nonlinear function of the weighted inputs
Multiple Layers
  • Multiple hidden layers between input and output
  • Training hidden layers is challenge
Gradient Descent
  • Define loss function
  • Minimize by moving along gradient
Backpropagation
  • Move Errors back through the network
  • Chain rule conception
Tools
  • Caffe - network defined via a configuration file
  • Torch - describe the network in Lua
  • Theano - describes the computation, generates CUDA code, runs it and returns results
CNN
  • Used for images
  • Images have spatial organization (nearby pixels are related)
  • Apply Convolutional filter
  • For Deep Learning GPU is important
Imagenet Competition
  • Convolution (extract useful features and retain them)
  • Pooling (Shrink image)
  • Softmax
  • Other
Simplest RNN - suffers from the gradient problem (vanishing / exploding gradients) during training
LSTM (Long Short Term memory)
Interword relationships from corpus (word2vec)

Happy Learning!!!

July 28, 2016

Fifth Elephant Day #1 Notes - Part II

Sessions # - Link

Talk #3 - Machine Learning in FinTech
  • Lending Space
  • Credit underwriting system
India
  • 2% Credit card usage
  • 65% of population < 27 yrs
  • Digital foot print (mobile)
  • Identity (Aadhar)
40 Decisions / Minute -> 100 Crores a month

Use Cases / Scenarios
  • Truth Score (Validity of address / person / sources)
  • Need Score (Urgency / Time to respond application)
  • Saver Score (cash flow real-time analytics)
  • Credit Score (Debt to income)
  • Credit awareness score
  • Continuous risk assessments
Talk #4 - Driving Behaviour from Smartphone Sensors
  • For Safety driving using smartphone sensors
  • Spatial / location data
  • Road traffic injuries due to distracted driving
  • Phone usage - 4x crash risk
  • Speedy driving - 45% car crash history
  • Driving behavior analysis / driving feedback
  • GPS + Inertial Navigational sensors (Accelerometer / Gyroscope / Magnetometer)
Characterization
  • Drive detection
  • Event detection
  • Collision detection
Qualification
  • Drive summarization and scoring
  • Risk modelling
Optimization
  • Events, location of events, duration of events
Dynamics
  • Sensors
  • Availability - wide variety across devices
  • Raw Data - noisy, unevenly spaced time series
  • Events - Time scales, combination of sensors
  • Model building - Labelled vs unlabelled data, feature engineering
  • Algorithms - Stream / batch efficiency
Techniques
  • Cluster data 
  • Eliminate uninteresting time periods
  • Classification / Regression models
  • Spectral clustering
Talk #5 - Indian Agriculture
  • Crop rotation literacy
  • Data curation, Query tools on data product
  • Visualization and plotting of Agricultural data
Talks #6 and #7 - The last two talks were from ecologists
  • Using Image comparison for Big Cat Counting
  • Predicting Big Cat Areas (Territories)
  • Observe Nature, Frame Hypothesis, Design Experiments
  • Confront with competing hypothesis
  • Spacegap program
  • Markov chain Monte-Carlo technique


Happy Learning!!!

Fifth Elephant Day #1 Notes - Part I

Sessions # - Link

Talk #1 - Data for Genomic Analysis

Great talk by Ramesh. I had attended his session / technical discussion earlier. This session provided insights on genome / discrepancies in genome sequence leading to rare diseases.

Genome - 3 Billion X 2 Characters
Character values vary from person to person
Stats (1/10th of probability of cancer)
Baseline risk: breast cancer (1 in 8), ovarian cancer (1 in 70)
BRCA1 mutation (5-6 fold increase in breast cancer, 27 fold increase for ovarian cancer)

In India
  • 35% inherited risk mutation
  • 1/25 Thalassemia 
  • 1 in 400-900 Retinitis Pigmentosa
  • 1 in 500, Hypertrophic Cardiomyopathy
Data Processing
  • 1 Billion reads - 100GB data per person
  • Very similar sequence yet one character might differ
  • But reference is 3 Billion long
Efficiency
  • Need fast indexing
  • Suffix Trees and variations
  • Hash table based approaches
Reference Genome Sequence
  • Volume of data
  • Funnel down of variety of dimensions
  • Triplet Code (Molecule)
  • Variants of triplets nailed down to differences in the genome
  • GPU processing / reduce computation time
Concepts Discussed / Used
  • Hypothesis Testing
  • Stats Models
  • GPU Processing to reduce computation time
They also provide assessment for hereditary diseases at corporate level.

Talk #2 - Alternative to Wall Street Data

This session gave me some new strategies to collect / analyze data

How to Identify occupancy rate at hotel ?
  •  Count of cars from parking lots
  •  Number of rooms lights on
  •  Take pics of rooms from corner of street and predict based on images collected
  •  Unconventional ways to think of data collection (Beating the wall street model)
What are usual ways
  •  Checking websites
From an investor's perspective, lodging key metrics are a very important aspect
Data Sources
  • Direct data gathering
  • Web harvesting
  • Primary research
Primary Research
  • Look at and notice patterns in front of you
  • Difference in invoice numbers
  • Serial number changes, difference values
Free Data Sets in link
Lot of opportunity
  • Analyze international markets (India / China)
  • COGS (cost of goods sold)
  • SG&A
  • etc.
How to value data sets ?
  • Scarcity - how rare / not already widely available
  • Granularity - Time / aggregation level
  • Structured
  • Coverage



What is the generative value
  • Revenue Surprise Estimates
  • Dataset insight / Analysis
  • Operating GAAP measures
A great case study on the impact of smart watches vs luxury watches was presented. This session provided great insight into unconventional ways of collecting data
  • Generate money in automated system
  • Stock sensitivity to revenue surprises
  • Identify underlying ground truth
"Some Refreshing changes to world of investment"

Happy Learning!!!

June 17, 2016

Good Read - Design Patterns



Happy Learning!!!

June 15, 2016

Day #26 - R - Moving Weighted Average

Example code from a two-day workshop on the Azure ML module - a simple example of storing and accessing data from the Azure workspace.
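
The original embedded snippet is not reproduced here; below is a minimal, generic sketch of a weighted moving average in base R to match the post title. The series and weights are made up for illustration.

# Weighted moving average sketch in base R (illustrative data and weights)
x <- c(12, 15, 14, 18, 21, 19, 23, 25, 24, 28)
w <- c(0.5, 0.3, 0.2)                               # most recent observation weighted highest
wma <- stats::filter(x, filter = w, method = "convolution", sides = 1)
cbind(x, wma)                                       # first two values are NA (window not yet full)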



Happy Learning!!!

June 01, 2016

Day #25 - Data Transformations in R

This post covers performing data transformations in R as part of feature modelling. Advanced PCA will be covered at a later stage.
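
A minimal sketch of the kind of transformations covered here, assuming a small numeric data frame; the column names and values are illustrative.

# Common data transformations in R (illustrative data)
df <- data.frame(income = c(25000, 54000, 31000, 98000, 47000),
                 age    = c(23, 45, 31, 52, 38))

df$income_z      <- as.numeric(scale(df$income))                     # z-score standardisation
df$income_minmax <- (df$income - min(df$income)) /
                    (max(df$income) - min(df$income))                # min-max normalisation to [0, 1]
df$income_log    <- log(df$income)                                   # log transform for skewed values
df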



Data Normalization in Python

Happy Learning!!!

May 20, 2016

Day #24 - Python Code Examples

Examples for: for loops, while loops, dictionaries, functions, and plotting graphs.

Happy Learning!!!

Day #23 - Newton Raphson - Gradient Descent

Newton Raphson
  • Optimization Technique
  • Newton's method tries to find a point x satisfying f'(x) = 0
  • Iterates between successive approximations
  • Stop iterating when the difference between x(n+1) and x(n) is close to zero
Formula
  • For root finding: x(n+1) = x(n) - f(x(n)) / f'(x(n)); to find f'(x) = 0, apply the same rule to f': x(n+1) = x(n) - f'(x(n)) / f''(x(n))
  • Choose a suitable starting value x(0)
Gradient Descent
  • Works for convex function
  • x(n+1) = x(n) - af'(x)
  • a - learning rate
  • Gradient descent tries to find such a minimum x by using information from the first derivative of f
  • Gradient descent and Newton-Raphson are similar; only the update rule differs (see the sketch below)
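
A small sketch minimising the convex function f(x) = (x - 3)^2 with both update rules; the starting point, learning rate, and tolerance are arbitrary choices.

# Minimise f(x) = (x - 3)^2: f'(x) = 2(x - 3), f''(x) = 2
fprime  <- function(x) 2 * (x - 3)
fsecond <- function(x) 2

# Newton-Raphson on f'(x) = 0: x <- x - f'(x) / f''(x)
x <- 10
for (i in 1:20) {
  x_new <- x - fprime(x) / fsecond(x)
  if (abs(x_new - x) < 1e-8) break      # stop when successive iterates are close
  x <- x_new
}
x   # converges to 3 (in one step here, since f is quadratic)

# Gradient descent: x <- x - a * f'(x), with learning rate a
x <- 10; a <- 0.1
for (i in 1:200) {
  x_new <- x - a * fprime(x)
  if (abs(x_new - x) < 1e-8) break
  x <- x_new
}
x   # also converges to 3, but over many small steps
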
More Reads - Link

Happy Learning!!!

May 14, 2016

Day #22 - Data science - Maths Basics


Eigenvector - a vector whose direction is unchanged by the linear transformation; it is only scaled

Eigenvalue - the scaling factor applied along the corresponding eigenvector

Eigenvalue Decomposition - only a square matrix can be eigendecomposed

Trace - Sum of Eigen Values

Rank of A - Number of Non-Zero Eigen Values

SVD - Singular Value Decomposition
  • Swiss Army Knife of Linear Algebra
  • SVD - for Stock market Prediction
  • SVD - for Data Compression
  • SVD - to model sentiments
  • SVD is Greatest Gift of Linear Algebra to Data Science
  • The singular values of A are the square roots of the eigenvalues of A^T A (A transpose A) - checked in the sketch below
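
A short check of these facts in base R on a small symmetric matrix: trace = sum of eigenvalues, rank = number of non-zero eigenvalues, and singular values of A = square roots of eigenvalues of A^T A. The matrix is arbitrary.

# Eigen decomposition and SVD on a small matrix
A <- matrix(c(4, 1, 1, 3), nrow = 2)        # symmetric 2x2 matrix

e <- eigen(A)
e$values                                    # eigenvalues
sum(diag(A)); sum(e$values)                 # trace equals sum of eigenvalues
sum(abs(e$values) > 1e-10)                  # rank = number of non-zero eigenvalues

s <- svd(A)
s$d                                         # singular values of A
sqrt(eigen(t(A) %*% A)$values)              # = square roots of eigenvalues of A^T A
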
Happy Learning!!! (Revise  - Relearn - Practice)

May 09, 2016

Day #21 - Data Science - Maths Basics - Vectors and Matrices

Matrix - Combination of rows and columns
Check for Linear Dependence - row-reduce (e.g., R2 = R2 - 2·R1); if one of the rows becomes all zeros, the rows are linearly dependent
Span - the set of all linear combinations of a set of vectors
Rank - the number of linearly independent rows (or columns)

Good Related Read - Span

Vector Space - Space of vectors, collection of many vectors
If v and w belong to the space, v + w also belongs to the space; a scalar multiple of a vector also stays in the space
If the determinant is non-zero, then the vectors are linearly independent. Otherwise, they are linearly dependent

Vector space properties
  • Commutative  x+y = y+x
  • Associative (x+y)+z = x+(y+z)
  • Origin (zero) vector - the vector with all zeros, 0+x = x+0 = x
  • Additive (Inverse) - For every X there exists -x such that x+(-x) = 0
  • Distributivity over scalar sum: (r+s)x = rx + sx
  • Distributivity over vector sum: r(x+y) = rx + ry
  • Identity multiplication, 1*x = x
Subspace
Vector Space V, Subset W. W is called subspace of V
Properties
W is subspace in following conditions
  • Zero vector belongs to W 
  • if u and v are vectors, u+v is in W (closure under +)
  • if v is any vector in W, and c is any real number, c.v is in W
Any vector in the span of a subset S of V can be represented as a linear combination
 v = r1·v1 + r2·v2 + ... + rk·vk
where v1, v2, ..., vk are distinct vectors from S and each ri belongs to R

Basis - a linearly independent spanning set: a set is a basis if it is linearly independent and every vector in the vector space is a linear combination of the set. All bases for a vector space V have the same cardinality

Null Space, Row Space, Column Space
Let A be m x n matrix
  • Null Space - the set of all solutions of Ax = 0 (the homogeneous solutions), denoted Null A
  • Row Space - the subspace of R^n spanned by the row vectors of A
  • Column Space - the subspace of R^m spanned by the column vectors of A
Norms - Measure of length and magnitude
  • For (1,-1,2), L1 Norm = Absolute value = 1+1+2 = 4
  • L1 - Same Angle
  • L2 - Plane
  • L3 - Sum of vectors in 3D space
  • L2 norm of (5,2) = sqrt(5*5 + 2*2) = sqrt(29)
  • L infinity - Max of (5,2) = 5
Orthogonal - Dot product equals Zero
Orthogonality - orthogonal (perpendicular) nonzero vectors are linearly independent
Orthogonal matrix will always have determinant +/-1
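
A quick check of the norm examples above and the orthogonal-matrix property in base R; the rotation angle is arbitrary.

# Norms of the example vectors and an orthogonal matrix check
v <- c(1, -1, 2)
sum(abs(v))                    # L1 norm = 4
u <- c(5, 2)
sqrt(sum(u^2))                 # L2 norm = sqrt(29)
max(abs(u))                    # L-infinity norm = 5

c(1, 0) %*% c(0, 1)            # dot product zero => orthogonal

theta <- pi / 4                # a rotation matrix is orthogonal
Q <- matrix(c(cos(theta), sin(theta), -sin(theta), cos(theta)), nrow = 2)
det(Q)                         # determinant is +1 (would be -1 for a reflection)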

Happy Learning!!!

May 08, 2016

Day #20 - PCA basics

Machine learning algorithms adjust themselves based on the input data set, very different from traditional rules-based / logic-based systems. The capability to tune itself and work with a changing data set makes it a self-learning / self-updating system. Obviously, the inputs / updated data would be supplied by humans.

Basics
  • Line is unidirectional, Square is 2D, Cube is 3D
  • Fundamentally shapes are just set of points
  • An N-dimensional space can be represented as an N-dimensional hypercube
Feature Extraction
  • Converting a feature vector from Higher to lower dimension
PCA (Principal Component Analysis)
  • Input is a large number of correlated variables. We perform an orthogonal transformation to convert them into uncorrelated variables, and identify principal components based on the highest variation
  • Orthogonal vectors - dot product equals zero; the components are perpendicular to each other
  • This is achieved using SVD (Singular Value Decomposition)
  • SVD internally solves the matrix and identifies the eigenvectors
  • An eigenvector does not change direction when the linear transformation is applied
  • PCA is used to explain variations in data: find the principal component with the largest variation, then the direction with the next highest variation (orthogonal to the first)
  • Rotation or Reflection is referred as Orthogonal Transformation
  • PCA - Use components with high variations
  • SVD - Express Data as a Matrix
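
A minimal PCA sketch on the built-in iris measurements using prcomp, which performs the SVD-based decomposition described above; centring/scaling and the number of components shown are illustrative choices.

# PCA via prcomp (SVD-based) on the iris measurements
X <- iris[, 1:4]
pca <- prcomp(X, center = TRUE, scale. = TRUE)   # centre and scale the correlated variables

summary(pca)            # proportion of variance explained by each component
pca$rotation[, 1:2]     # loadings of the first two (orthogonal) principal components
head(pca$x[, 1:2])      # data projected onto the first two components
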
More Reads

Happy Learning!!!

May 03, 2016

Day #19 - Probability Basics

Concepts
  • Events - Subset of Sample Space
  • Sample Space - Set of all possible outcomes
  • Random Variable - Outcome of experiment captured by Random variable
  • Permutation - Ordering matters
  • Combination - Ordering does not matter
  • Binomial - Only two outcomes per trial
  • Poisson - Events that take place over and over again. Rate of Event denoted by lambda
  • Geometric - Suppose you'd like to figure out how many attempts at something is necessary until the first success occurs, and the probability of success is the same for each trial and the trials are independent of each other, then you'd want to use the geometric distribution
  • Conditional Probability - P(A given B): the probability that A will occur, assuming B has already occurred
  • Normal Distribution - Appears because of the central limit theorem (Gaussian and normal distribution are the same)
From Quora -  
"Consider a binomial distribution with parameters n and p. The distribution is underlined by only two outcomes in the run of an independent trial- success and failure. A binomial distribution converges to a Poisson distribution when the parameter n tends to infinity and the probability of success p tends to zero. These extreme behaviours of the two parameters make the mean constant i.e. n*p = mean of Poisson distribution "

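A quick numerical check of the quoted convergence: a Binomial(n, p) with large n, small p and fixed n*p is close to Poisson(n*p). The values of n and p below are arbitrary.

# Binomial with large n, small p approaches Poisson with lambda = n * p
n <- 10000; p <- 0.0005; lambda <- n * p   # lambda = 5
k <- 0:15
round(cbind(binomial = dbinom(k, n, p),
            poisson  = dpois(k, lambda)), 5)   # probabilities are nearly identical
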
May 01, 2016

Day #18 - Linear Regression , K Nearest Neighbours

Linear Regression
  • Fitting straight line to set of data points
  • Create line to predict new values based on previous observations
  • Uses OLS (Ordinary Least Squares). Minimize squared error between each point and line
  • Maximum likelihood estimation
  • R squared - fraction of total variation in Y explained by the model
  • R squared near 0 - poor fit; near 1 - good fit
  • A higher R squared indicates a better fit

KNN
  • Supervised Machine Learning Technique
  • A new data point is classified based on its distance to existing labelled points (majority vote among the K nearest)
  • Choice of K - small enough to pick only nearby neighbours
  • Determine the value of K through trial tests
  • Plot the K nearest neighbours on a scatter plot to inspect the neighbourhood (see the sketch below)
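
A minimal sketch of both techniques in R: ordinary least squares with lm() and K nearest neighbours with class::knn() on the built-in iris data; the train/test split and K = 5 are arbitrary choices.

# Linear regression (OLS) on a simple relationship
fit <- lm(Petal.Length ~ Petal.Width, data = iris)
summary(fit)$r.squared             # R squared: fraction of variation in Y explained

# K nearest neighbours classification (K chosen by trial)
library(class)
set.seed(1)
idx   <- sample(nrow(iris), 100)
train <- iris[idx, 1:4];  test <- iris[-idx, 1:4]
pred  <- knn(train, test, cl = iris$Species[idx], k = 5)
mean(pred == iris$Species[-idx])   # test accuracy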

Related Read
Recommendation Algo Analysis
Linear Regression

Happy Learning!!!

April 22, 2016

Day #17 - Python Basics

Happy Learning!!!

Neural Networks Basics


Notes from Session
  • Neurons - Synapses. Model brain at high level
  • Machine Learning  - Algorithms for classification and prediction
  • Mimic brain structure in technology
  • Recommender engines use neural networks
  • With more data we can increase accuracy of models
  • Linear Regression, y = mx + b. Fit the data set with as little error as possible.
Neural Network
  • Equation starts from neuron
  • Multiply inputs by weights (weights are the coefficients)
  • Apply activation function (Depends on problem being solved)
Basic Structure
  • Input Layer
  • Hidden Layer (Multiple hidden layers) - Computation done @ hidden layer
  • Output Layer
  • Supervised learning (Train & Test)
  • The loss function determines how the error is measured
  • Deep Learning - Automatic Feature Detection
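
A tiny sketch of the forward pass described above: multiply inputs by weights, add a bias, then apply a nonlinear activation (a sigmoid here); the numbers are arbitrary.

# Single-neuron forward pass: weighted sum followed by a nonlinear activation
sigmoid <- function(z) 1 / (1 + exp(-z))

x <- c(0.5, -1.2, 3.0)      # inputs
w <- c(0.4,  0.1, -0.7)     # weights (coefficients)
b <- 0.2                    # bias

z <- sum(w * x) + b         # linear combination
a <- sigmoid(z)             # activation: output of the neuron
a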


Happy Learning!!!

April 14, 2016

Basics - SUPPORT VECTOR MACHINES

Good Reading from link

Key Notes
  • Allow non-linear decision boundaries
  • SVM - Out of box supervised learning technique
  • Feature Space - Finite dimensional vector space
  • Each dimension represents feature
  • Goal of SVM - train a model that assigns unseen objects to a particular category
  • Creates a linear partition of the feature space
  • Based on the features, it places an object above or below the separating line (hyperplane)
  • No stochastic element involved (no dependence on any previous state)
  • Support vector classifiers (soft margin classifiers) allow some observations to be on the incorrect side of the hyperplane, giving a soft margin
Advantage
  • High Dimensionality, Memory Efficiency, Versatility
Disadvantages
  • Non probabilistic
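
A minimal sketch using the e1071 package's svm() on the built-in iris data, assuming e1071 is installed; the kernel and parameters are left at their defaults and the train/test split is arbitrary.

# SVM classification sketch with e1071 (default radial kernel, soft margin)
library(e1071)
set.seed(1)
idx   <- sample(nrow(iris), 100)
model <- svm(Species ~ ., data = iris[idx, ])     # soft-margin SVM with default settings
pred  <- predict(model, iris[-idx, ])
mean(pred == iris$Species[-idx])                  # held-out accuracy
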
More Reads

Happy Learning!!!

Day #16 - Python Basics

Happy Learning!!!