Difference between Dependency Management and Dependencies in Maven

Maven is a software project management tool, used to manage a project's build, dependencies, and other information.

It has two mechanisms for adding dependencies on other modules/projects: the dependencies tag and the dependencyManagement tag. People often wonder what the difference between the two is, and, more importantly, when to use which.

First of all, we should have an idea of what a multi-module application is, because it is only in multi-module applications that the two differ.

A multi-module project is, as its name suggests, a project that consists of multiple modules, where a module is a project. You can think of a multi-module project as a logical grouping of sub-projects. The packaging of a multi-module project is “pom” since it consists of a pom.xml file but no artifact (jar, war, ear, etc).


That’s the technical story. But it is not great if we are trying to teach someone, as concepts should be as simple as a story for a 12-year-old child. So let’s understand dependencies and dependency management through one.

There is a man called Peter who owns two ice cream parlors, Gelatos and Baskin Robbins. Peter has two children, Ron and Seria.

Peter keeps 2 ice creams in the Gelatos parlor – mango [basic version] and strawberry [moderate version] – and 1 ice cream in Baskin Robbins – black currant [high version].

Both Ron and Seria can have all the ice creams their father keeps in the Gelatos parlor, but they can have ice cream from Baskin Robbins only if they ask for it. That means they can only have black currant if they ask for it specifically, but they don’t have to mention which version of black currant they need, as their father Peter already knows which kind of black currant ice cream they have.

Here Peter is the parent module, and Ron and Seria are child modules. Gelatos is the dependencies tag and Baskin Robbins is the dependencyManagement tag.


So all the dependencies present in the dependencies tag will be available to all child modules. But the dependencies present in the dependencyManagement tag of the parent module will be available to a child only if they are also declared in the dependencies tag of that child module.

So why do we even use dependencyManagement if the dependencies tag passes every dependency to the children?

  1. Not all children need every dependency present in the parent module, so it is wise to use dependencyManagement.
  2. Using dependencyManagement, we enforce consistency in which version of each artifact is used throughout the application. [It helps maintain artifact versions.]
  3. dependencyManagement manages the version, scope, and exclusions of artifacts in child modules.

Example –

Parent POM A

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xmlns="http://maven.apache.org/POM/4.0.0"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.test</groupId>
    <artifactId>A</artifactId>
    <packaging>pom</packaging>
    <version>1.0-SNAPSHOT</version>
    <modules>
        <module>B</module>
        <module>C</module>
    </modules>

    <dependencies>
            <dependency>
            <groupId>com.external</groupId>
            <artifactId>d1</artifactId>
            <version>1</version>
        </dependency>
    </dependencies>
    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>com.external</groupId>
                <artifactId>d2</artifactId>
                <version>1</version>
            </dependency>
            <dependency>
                <groupId>com.external</groupId>
                <artifactId>d3</artifactId>
                <version>1</version>
            </dependency>
        </dependencies>

    </dependencyManagement>
</project>

It has two child modules, B and C. Parent A has 3 dependencies in total:

  • d1 [Inside dependencies tag]
  • d2 [Inside dependency management tag]
  • d3 [Inside dependency management tag]

 

Child POM B

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <groupId>com.test</groupId>
        <artifactId>A</artifactId>
        <version>1.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.test</groupId>
    <artifactId>B</artifactId>
    <packaging>pom</packaging>

    <dependencies>
        <dependency>
            <groupId>com.external</groupId>
            <artifactId>d2</artifactId>
        </dependency>
        <dependency>
            <groupId>com.external</groupId>
            <artifactId>d4</artifactId>
            <version>1</version>
        </dependency>
    </dependencies>
</project>

It will have access to 3 artifacts:

  • d1 [coming from the dependencies tag of the parent]
  • d2 [coming from the parent's dependencyManagement, as it is mentioned in B's own dependencies tag]
  • d4 [coming from its own dependencies tag]

Note: it will not have the d3 artifact, as d3 is mentioned in the dependencyManagement tag of the parent POM [POM A] but is not declared in the dependencies tag of the child POM [POM B].

Child POM C

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <groupId>com.test</groupId>
        <artifactId>A</artifactId>
        <version>1.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.test</groupId>
    <artifactId>C</artifactId>
    <packaging>pom</packaging>

    <dependencies>
        <dependency>
            <groupId>com.external</groupId>
            <artifactId>d3</artifactId>
        </dependency>
        <dependency>
            <groupId>com.external</groupId>
            <artifactId>d5</artifactId>
            <version>1</version>
        </dependency>
    </dependencies>
</project>

Similarly, it will have access to 3 artifacts:

  • d1 [coming from the dependencies tag of the parent]
  • d3 [coming from the parent's dependencyManagement, as it is mentioned in C's own dependencies tag]
  • d5 [coming from its own dependencies tag]

Note: it will not have the d2 artifact, as d2 is mentioned in the dependencyManagement tag of the parent POM [POM A] but is not declared in the dependencies tag of the child POM [POM C].


 

Further Reading – http://maven.apache.org/guides/introduction/introduction-to-dependency-mechanism.html

 


Add Google Authentication using Firebase in React+Redux Application

Single page applications (SPAs) are quite popular these days; they are easy to build thanks to all the available libraries and frameworks. Angular (by Google) and React (by Facebook) are the most famous options to explore. Because of these, front-end applications are now easy to manage and maintain. But even if we can create an SPA with all these technologies, we still need some server-side logic to persist our data, and most importantly we need authentication, so that each user can perform actions only in the scope they are authorized for.

Below is the diagram which shows the authentication flow.

[Diagram: OAuth implicit authentication flow]

Quick Intro of Firebase

Firebase is Google's application development platform. Among its services are authentication, a realtime database, and Firebase Cloud Messaging (FCM, formerly Google Cloud Messaging), a cross-platform solution for messages and notifications for Android, iOS, and web applications, which currently can be used at no cost.

Today we will build a simple SPA using React, Redux, and Firebase (which will provide the Google authentication).

Prerequisite – installed software: Node.js and npm.

We will follow https://github.com/jainamit333/react_google_authetication to walk through the development.

Branch name: vanilla

Create a directory named google-authentication-react:

 mkdir google-authentication-react
 cd google-authentication-react

 

Create the directory structure as follows:

[Screenshot: project directory structure, with client, server, redux, and services folders]

Add the following code to package.json:

{
"name": "google-authentication-react",
"version": "0.1.0",
"private": true,
"scripts": {
"start": "npm run build; node server/index.js",
"start-dev": "nodemon server/index.js",
"build": "webpack -p",
"build-dev": "webpack -w",
"build-sass": "node-sass -w ./client/styles/main.scss -o ./client/styles/mainSheet",
"test": "echo \"Error: no test specified\" &amp;amp;amp;amp;amp;amp;&amp;amp;amp;amp;amp;amp; exit 1",
"stats": "webpack --env production --profile --json &amp;amp;amp;amp;amp;gt; stats.json"
},
"dependencies": {
"axios": "^0.16.1",
"babel": "^6.5.2",
"babel-core": "^6.18.0",
"babel-loader": "^6.2.7",
"babel-preset-es2015": "^6.18.0",
"babel-preset-react": "^6.16.0",
"babel-preset-stage-2": "^6.24.1",
"body-parser": "^1.17.1",
"bootstrap": "^4.0.0-alpha.6",
"css-loader": "^0.28.0",
"express": "^4.15.2",
"firebase": "^4.2.0",
"muicss": "^0.9.20",
"node-sass": "^4.5.2",
"react": "^15.6.1",
"react-addons-css-transition-group": "^15.6.0",
"react-addons-transition-group": "^15.6.0",
"react-dom": "^15.6.1",
"react-redux": "^5.0.4",
"react-router": "^3.0.0",
"react-router-dom": "^4.1.2",
"reactstrap": "^4.8.0",
"redux": "^3.6.0",
"redux-logger": "^3.0.1",
"redux-thunk": "^2.2.0",
"sass-loader": "^6.0.3",
"style-loader": "^0.16.1",
"volleyball": "^1.4.1",
"webpack": "^2.7.0",
"webpack-livereload-plugin": "^0.10.0"
},
"devDependencies": {
"chai": "^3.5.0",
"cross-env": "^3.1.4",
"expose-loader": "^0.7.3",
"mocha": "^3.1.2",
"nodemon": "^1.11.0",
"react-hot-loader": "^1.3.1",
"supertest": "^2.0.1",
"supertest-as-promised": "^4.0.1",
"webpack-dashboard": "^0.4.0",
"webpack-dev-server": "^2.6.1"
}
}

Note the firebase entry in the dependencies – that is the dependency we have added for this tutorial.

Note: this package.json may contain many extra dependencies, as I extracted it from another of my projects just for tutorial purposes.

Install all the added dependencies:

npm install

Add the webpack config code (conventionally in webpack.config.js):


const path = require('path');
const LiveReloadPlugin = require('webpack-livereload-plugin');
const webpack = require('webpack');

module.exports = {
    entry: './client/index.js',
    output: {
        filename: 'bundle.js',
        path: path.resolve(__dirname, 'client/dist')
    },
    context: __dirname,
    resolve: {
        extensions: ['.js', '.jsx', '.json', '*']
    },
    devtool: 'cheap-module-source-map',
    devServer: {
        inline: true,
        contentBase: './dist',
        port: 3001
    },
    module: {
        rules: [{
            test: /\.jsx?$/,
            exclude: /(node_modules|bower_components)/,
            loader: 'babel-loader',
            options: {
                presets: ['react', 'es2015', 'stage-2']
            }
        },
        {
            test: /\.scss$/,
            use: [
                'style-loader',
                'css-loader',
                'sass-loader'
            ]
        }]
    },
    plugins: [
        new webpack.DefinePlugin({
            'process.env.COSMIC_BUCKET': JSON.stringify(process.env.COSMIC_BUCKET),
            'process.env.COSMIC_READ_KEY': JSON.stringify(process.env.COSMIC_READ_KEY),
            'process.env.COSMIC_WRITE_KEY': JSON.stringify(process.env.COSMIC_WRITE_KEY)
        }),
        new LiveReloadPlugin({appendScriptTag: true})
    ]
};

 

React and Redux related files

  • server/index.js
const express = require('express');
const app = express();
const path = require('path');
const volleyball = require('volleyball');
app.use(volleyball);
//serve up static files
app.use(express.static(path.resolve(__dirname, '..', 'client')));
app.use(express.static(path.resolve(__dirname, '..', 'node_modules')));
app.use(function (err, req, res, next) {
    console.error(err);
    console.error(err.stack);
    res.status(err.status || 500).send(err.message || 'Internal server error.');
});

// handle every other route with index.html, which will contain
// a script tag to our application's JavaScript file(s).
app.get('*', function (request, response) {
    response.sendFile(path.resolve(__dirname, '..', 'client', 'index.html'))
});
//listen on port 3000
app.listen(process.env.PORT || 3000, function () {
    console.log("Rockin' out on port 3000 homie");
});

We are starting the server on port 3000 (or the port given by the PORT environment variable).

  • redux/actions/actions.js
const constants = {
    START_AUTHENTICATING: 'START_AUTHENTICATING',
    AUTHENTICATION_SUCCESSFUL: 'AUTHENTICATION_SUCCESSFUL',
    AUTHENTICATION_FAILED: 'AUTHENTICATION_FAILED',
    ALREADY_LOGIN: 'ALREADY_LOGIN',
    LOGOUT: 'LOGOUT',
    LOGOUT_SUCCESSFUL: 'LOGOUT_SUCCESSFUL',
    LOGOUT_ERROR: 'LOGOUT_ERROR',
}

export default constants;

We have created constants for the actions that will be supported by our dummy application.

  • redux/actions/auth.js
import constant from './actions'

export const startAuth = keyWord => {
    return {
        type: constant.START_AUTHENTICATING,
        authenticated: false,
        authenticating: true
    }
}

export const alreadyLogin = response => {
    return {
        type: constant.ALREADY_LOGIN,
        authenticated: true,
        authenticating: false,
        user: response
    }
}

export const authError = error => {
    return {
        type: constant.AUTHENTICATION_FAILED,
        authenticated: false,
        authenticating: false,
        error
    }
}

export const authSuccess = response => {
    return {
        type: constant.AUTHENTICATION_SUCCESSFUL,
        user: response,
        authenticated: true,
        authenticating: false
    }
}

These are the various actions that will be spawned during our login lifecycle.
We usually create a separate actions file for each piece of the flow.

  • redux/reducers/auth.js
import constants from '../actions/actions'

// Note: the trailing `...state` spread after the updates has been removed –
// it would have thrown (objects are not iterable in a call spread) and, conceptually,
// would have overwritten the freshly updated fields with the old state.
const auth = (state = {}, action) => {
    switch (action.type) {
        case constants.START_AUTHENTICATING:
            return Object.assign({}, state, {
                authenticated: false,
                authenticating: true,
                user: action.user
            })
        case constants.AUTHENTICATION_FAILED:
            return Object.assign({}, state, {
                authenticated: false,
                authenticating: false,
                user: {}
            })
        case constants.AUTHENTICATION_SUCCESSFUL:
            return Object.assign({}, state, {
                authenticated: true,
                authenticating: false,
                user: action.user
            })
        case constants.ALREADY_LOGIN:
            return Object.assign({}, state, {
                authenticated: true,
                authenticating: false,
                user: action.user
            })
        default:
            return state
    }
}
export default auth

As you can see, on success we set the user and set the authenticated and authenticating params accordingly in every case.
The other two params are mostly useful if we want to show a loading bar during the login process.

 

  • redux/reducer.js
import {combineReducers} from "redux";
import auth from "./reducers/auth";

const googleBooks = combineReducers({
    auth
})

export default googleBooks

This is the place where we combine all our reducers.
For now, we have only the one auth reducer, which we pass to combineReducers.

  • redux/store.js

import { createStore, applyMiddleware } from 'redux';
import reducer from './reducer';
import thunk from 'redux-thunk';
import { createLogger } from 'redux-logger';

const initialState = {
    auth: {
        authenticated: false,
        authenticating: false,
        user: {},
    }
}

const store = createStore(
    reducer,
    initialState,
    applyMiddleware(
        createLogger(),
        thunk
    )
);

export default store;

We create the Redux store from our combined reducer, an initial state, and the logger and thunk middleware.

  • client/index.html
<!DOCTYPE html>
<html>
<head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <title>Books Around You</title>
    			<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
    <script src="https://code.jquery.com/jquery-3.2.1.min.js"
            integrity="sha256-hwg4gsxgFZhOsEEamdOYGBf13FyQuiTwlAQgxVSNgt4="
            crossorigin="anonymous"></script>

    <script src="/dist/bundle.js" defer></script>
</head>
<body>


<div id="root"></div>


</body>
</html>

This creates a placeholder (the root div) for all our React components.

  • client/index.js
import React from 'react';
import ReactDOM from 'react-dom';
import {Provider} from 'react-redux';
import store from '../redux/store';
import Dashboard from "./components/Dashboard";

ReactDOM.render(
    <Provider store={store}>
        <Dashboard />
    </Provider>,
    document.getElementById('root')
);

We add the Dashboard component as our main component, which will be rendered into the root div.

  • client/components/Dashboard.js
import React, {Component} from "react";
import {connect} from "react-redux";
import Navigation from "./Navigation";
import {login} from "../../services/firebase/auth";
import UserInfoPanel from "./UserInfoPanel";

class Dashboard extends Component {

    componentDidMount() {
        login()
    }

    render() {
        return (
            <div>
                <Navigation />
                <div className="mui-row">
                </div>
                <div className="mui-row">
                    <UserInfoPanel/>
                </div>
            </div>
        )
    }
}

function mapStateToProp(state) {
    return state;
}

export default connect(mapStateToProp)(Dashboard)

We call the login method in componentDidMount.
So it checks whether we are already logged in as soon as the component mounts.

  • client/components/Navigation.js
import React from "react";
import {connect} from "react-redux";
import {login, logout} from "../../services/firebase/auth";

class Navigation extends React.Component {

    render() {
        var styles = {
            marginTop: {
                marginTop: '10px'
            },
            baseColor: {
                color: '#a83808'
            }
        }
        return (
            <nav className="navbar navbar-default">
                <div className="container-fluid">
                    <div className="navbar-header">
                        <a className="navbar-brand" href="#">
                            <span className="glyphicon glyphicon-bishop" aria-hidden="true" style={styles.baseColor}></span>
                        </a>
                    </div>
                    <ul className="nav navbar-nav navbar-right">
                        <li>
                            {this.props.authenticated && <span style={styles.marginTop} onClick={logout} className="btn btn-sm btn-danger">Logout</span>}
                            {/* pass the handler itself; invoking login() during render would fire it immediately */}
                            {!this.props.authenticated && <span style={styles.marginTop} onClick={login} className="btn btn-sm btn-danger">Login</span>}
                        </li>
                    </ul>
                </div>
            </nav>
        );
    }
}

function mapStateToProps(state) {
    return {
        authenticated: state.auth.authenticated,
    }
}

export default connect(mapStateToProps)(Navigation)

If the user is logged in, the LOGOUT button will render; otherwise the LOGIN button will.

  • client/components/UserInfoPanel.js
import React from 'react';
import {connect} from "react-redux";

class UserInfoPanel extends React.Component {

    render() {
        const styles = {
            card: {
                width: '20em',
                position: 'relative',
                display: 'flex',
                flexDirection: 'column',
                backgroundColor: '#fff',
                border: '1px solid rgba(0,0,0,.125)',
                borderRadius: '.25rem',
                padding: '3px',
                margin: '2px'
            }
        }
        return (
            <div className="card" style={styles.card}>
                {this.props.authenticated && <img className="card-img-top img-thumbnail" src={this.props.user.photoURL} alt="Card image cap"/>}
                {this.props.authenticated &&
                    <div className="card-block">
                        <h4 className="card-title">{this.props.user.displayName}</h4>
                        {this.props.user.email}
                    </div>
                }
            </div>
        );
    }
}

function mapStateToProps(state) {
    return {
        authenticated: state.auth.authenticated,
        authenticating: state.auth.authenticating,
        user: state.auth.user
    }
}

export default connect(mapStateToProps)(UserInfoPanel)

This component shows the information of the logged-in user.
In mapStateToProps you can see that we map state params to props of the component.
This is a connected component.

Firebase related Files

  • services/firebase/config.js
import firebase from 'firebase'

const config = {
    apiKey: "<your api key from google developer console>",
    authDomain: "<auth domain from firebase project>",
    databaseURL: "<database url from firebase>",
    storageBucket: "<storage bucket from firebase>",
}

firebase.initializeApp(config);
export const provider = new firebase.auth.GoogleAuthProvider();
provider.addScope('https://www.googleapis.com/auth/plus.login')
export const firebaseAuth = firebase.auth

Replace the config values with your own.
firebase.initializeApp(config) initializes Firebase, and new firebase.auth.GoogleAuthProvider() creates a Google authentication provider.
provider.addScope states which Google API scope we are using; for now we only need plus.login, as we are using it only for authentication.
When you need other access as well, add the additional scopes to the same provider.

  • services/firebase/auth.js
import {firebaseAuth, provider} from './config'
import {alreadyLogin, authError, authSuccess, startAuth} from "../../redux/actions/auth";
import store from '../../redux/store'

export function logout() {
    return firebaseAuth().signOut()
}

function doLogin() {
    firebaseAuth().signInWithPopup(provider).then(function (result) {
        store.dispatch(authSuccess(result.user))
    }).catch(function (error) {
        store.dispatch(authError(error))
    });
}

export function login() {
    firebaseAuth().onAuthStateChanged((response) => {
        if (response) {
            store.dispatch(alreadyLogin(response))
        } else {
            store.dispatch(startAuth())
            doLogin()
        }
    });
}

In the login method we first check whether the user is already logged in or not.
If not, it calls the doLogin method; otherwise, it dispatches the alreadyLogin action.

In the doLogin method we log in using the Google provider.
If it succeeds we dispatch the authSuccess action, otherwise the authError action.

NOTE: since we are using Google authentication from Firebase, it has to be enabled in Firebase. Inside your Firebase project, go to Authentication, then Sign-in method, and enable the Google provider.

Different dropouts in TensorFlow

Dropout is a regularization technique for reducing overfitting in neural networks by preventing complex co-adaptations on training data. It is a very efficient way of performing model averaging with neural networks.


Why Dropout: Dropout helps prevent weights from converging to identical positions. It does this by randomly turning nodes off during forward propagation.

In simple terms, some of your neurons will not participate in the calculation.

In TensorFlow we have two dropout functions.

  • tf.nn.dropout
    • It has a parameter keep_prob, which states the probability that a neuron is kept (not dropped). If we give keep_prob the value 0.6, 60% of the neurons will remain and 40% will be dropped.
  • tf.layers.dropout
    • It has two main parameters, rate and training. rate is the fraction of neurons that will be dropped. If rate is 0.6, then 60% of the neurons will be dropped and 40% will be used.
    • So keep_prob = 1 - rate.
    • The training parameter differentiates whether the network is running for training or for inference. We need dropout only while training the neural network, not while testing (inferencing) it.

So what is the difference between these two functions?

tf.layers.dropout is a wrapper over tf.nn.dropout, which gives us the demarcation of whether to return the output in training mode (apply dropout) or in inference mode (return the input untouched).
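To make the relationship concrete, here is a minimal sketch of the two calls side by side, assuming the TensorFlow 1.x API (where both functions live); the tensor shape is made up for illustration:

import tensorflow as tf  # TensorFlow 1.x API assumed

x = tf.placeholder(tf.float32, shape=[None, 128])  # hypothetical layer activations
is_training = tf.placeholder(tf.bool)

# tf.nn.dropout: keep_prob is the probability that a neuron is KEPT.
drop_nn = tf.nn.dropout(x, keep_prob=0.6)  # keeps ~60%, drops ~40%

# tf.layers.dropout: rate is the probability that a neuron is DROPPED,
# and `training` switches dropout off automatically at inference time.
drop_layers = tf.layers.dropout(x, rate=0.4, training=is_training)

During training the two behave equivalently here, since keep_prob = 1 - rate; at inference, tf.layers.dropout simply returns its input when training is False.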

 

What Is YAGNI?

Developers often debate what YAGNI actually is. Writing any new piece of code can tie a developer's hands in the name of YAGNI.

I have worked in both a product-based company and a service-based company. I personally feel that YAGNI is more appreciated in service-based companies, as in product-based ones any extra feature is helpful.

People back YAGNI by saying “You don’t wipe before you shit.” So true. We should not increase the scope of the problem, as that may open the gates to new bugs. So YAGNI tells us: “don’t write any extra/new code unless it is actually required”.

I agree with the last quoted statement. But should we tie our own hands rather than implement the requirement in a fashion that stays open for extension, or write functionality in a more generic way so that it can be reused in the near future?

So here is my definition of YAGNI: always write code within the scope of the current requirement, but don't cry YAGNI if someone wants to solve it in a generic or extensible way. Code should be closed for modification but open for extension.

Try to solve problems in generic ways that can be reused in the future.

Difference Between Generative and Discriminative Machine Learning

To understand these two models, we first have to see the difference between joint probability [P(x,y)] and conditional probability [P(x|y)].

Joint probability: p(A and B). The probability of events A and B both occurring; it is the probability of the intersection of two or more events, which may be written p(A ∩ B). Example: the probability that a card is a four and red = p(four and red) = 2/52 = 1/26. (There are two red fours in a deck of 52: the 4 of hearts and the 4 of diamonds.)

Conditional probability: p(A|B) is the probability of event A occurring, given that event B occurs. Example: given that you drew a red card, what is the probability that it is a four? p(four|red) = 2/26 = 1/13. Out of the 26 red cards (given a red card), there are two fours, so 2/26 = 1/13.

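As a quick sanity check of the card arithmetic above, here is a tiny Python sketch (the variable names are just for illustration):

# Joint vs. conditional probability for the red-four card example.
deck = 52
red_cards = 26
red_fours = 2  # the 4 of hearts and the 4 of diamonds

p_four_and_red = red_fours / deck         # joint: 2/52 = 1/26
p_four_given_red = red_fours / red_cards  # conditional: 2/26 = 1/13

print(p_four_and_red)    # ~0.0385
print(p_four_given_red)  # ~0.0769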

A generative algorithm models how the data was generated in order to categorize a signal. It asks the question: based on my generation assumptions, which category is most likely to generate this signal? Let’s say you have input data x and you want to classify the data into labels y. A generative model learns the joint probability distribution p(x,y). Because it models how the data was “generated”, you ask it “what’s the likelihood that this or that class generated this instance?” and pick the class with the higher probability.

A discriminative algorithm does not care about how the data was generated; it simply categorizes a given signal. A discriminative model learns the conditional probability distribution p(y|x), which you should read as the probability of y given x. A discriminative algorithm uses the data to create a decision boundary, so you ask it “which side of the decision boundary is this instance on?”

The fundamental difference between discriminative models and generative models is:

  • Discriminative models learn the (hard or soft) boundary between classes
  • Generative models model the distribution of individual classes

Given an input data point x, the aim is to predict a continuous (regression) or discrete (classification) output. That is, given x, we are interested in modeling p(y|x). There are two main approaches to this:

1. Generative Models:
One way is to model p(x, y) directly. Once we do that, we can obtain p(y|x) by simply conditioning on x, and we can then use decision theory to determine class membership, i.e. we can use a loss matrix etc. to determine which class the point belongs to (such an assignment would minimize the expected loss). For example, in a Naive Bayes model you can learn p(y), the prior class probabilities, from the data. You can also learn p(x|y) from the data using maximum likelihood estimation (or a Bayes estimator if you will). Once you have p(y) and p(x|y), it is not difficult to find p(x, y).

2. Discriminative Models:
Instead of modeling p(x, y), we can directly model p(y|x); for example, in logistic regression p(y|x) is assumed to be of the form 1 / (1 + exp(-Σi wi xi)). All we have to do in such a case is learn the weights that minimize the log loss.

Generative models often outperform discriminative models on smaller datasets because their generative assumptions place some structure on your model that prevent overfitting. For example, let’s consider Naive Bayes vs. Logistic Regression. The Naive Bayes assumption is of course rarely satisfied, so logistic regression will tend to outperform Naive Bayes as your dataset grows (since it can capture dependencies that Naive Bayes can’t). But when you only have a small data set, logistic regression might pick up on spurious patterns that don’t really exist, so the Naive Bayes acts as a kind of regularizer on your model that prevents overfitting. There’s a paper by Andrew Ng and Michael Jordan on discriminative vs. generative classifiers that talks about this more.

Whenever an algorithm involves assuming, calculating, or estimating the distribution of Y, it is generative; simply put, if the algorithm cares about the distribution of Y it is generative, and if not, it is discriminative.
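To see the two families side by side in code, here is a minimal sketch using scikit-learn; the synthetic dataset and the GaussianNB vs. LogisticRegression pairing are just one concrete generative/discriminative example, not the only choice:

# Generative (Naive Bayes) vs. discriminative (logistic regression) on the same data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB            # generative: models p(x|y) and p(y)
from sklearn.linear_model import LogisticRegression   # discriminative: models p(y|x) directly

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

generative = GaussianNB().fit(X_train, y_train)
discriminative = LogisticRegression().fit(X_train, y_train)

print("Naive Bayes accuracy:        ", generative.score(X_test, y_test))
print("Logistic regression accuracy:", discriminative.score(X_test, y_test))

With very little data the generative assumptions often help; as the dataset grows, the discriminative model usually catches up and overtakes, which is exactly the Ng and Jordan observation above.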

Now, a small story to tell your 12-year-old kid, so that they can also understand the difference between these two models.

Let’s say you have two kids, “Gen” and “Dis”, and since their birth they have never opened their eyes. Today is the first day they will open their eyes, and you want to celebrate the occasion by teaching them the difference between a cat and a dog. You take them to a pet store nearby.

Before showing them around, you tell Gen and Dis to pay special attention to color, size, eye color, fur length, voice, etc. (the feature set) of the pets they are going to see. At the end of the visit, you want to check whether they understood the difference between a cat and a dog.

Now you give two photos, one of a cat and one of a dog, to Dis and ask which one is which. Dis has meticulously written down several conditions, like: if the voice sounds like a meow, and the eyes are blue or green, and it has stripes of brown or black, then the animal is a cat. Thanks to her relatively simple rules, she quickly detects which one is a cat and which one is a dog.

Now, instead of giving her two photos, you give Gen two pieces of blank paper and ask her to draw what a cat and a dog look like.

Well, now, given any photo, Gen can also tell which one is a cat and which one is a dog, based on the drawings she created. In most cases the drawing of the cat and dog was unnecessary and time-consuming for the task of detecting which one is a cat.

But suppose there were only a few dogs and cats for Gen and Dis to look at (little training data). In that case, if you show a photo of a brown dog with stripes and blue eyes, there is a chance that Dis would mark it as a cat, while Gen has her drawings and can better detect that this photo is of a dog.

If you ask Gen to pay attention to more things (features), she will create a better sketch. But if you show more examples (a bigger data set) of cats and dogs, Dis will mostly be better than Gen.

Since Dis is very meticulous in her observations, if you ask her to pay attention to more features she will create more complicated rules (overfitting), and the chance of wrongly identifying a cat and a dog will increase; that would not happen as easily with Gen.

What if, before going to the pet store, I don't tell them that there are only two types of animals (no labeled data)? Dis would fail completely, because she would not know what to look for, while Gen would be able to draw her sketches anyway. This is a huge advantage sometimes (semi-supervised settings).

Now let me reveal the suspense, which you might already have guessed: Dis stands for discriminative and Gen stands for generative.


Puppet – Introduction for Beginners

In our tech world, new tools and frameworks emerge every day to help us in our work. One of them is Puppet, a configuration management tool. This blog is for those who are trying to explore a configuration management tool, or for someone who is hearing the name Puppet for the first time. Let us explore what Puppet actually is, why we need it, and how it works.

What is a Configuration Management Tool?

Configuration management (CM) is a systems engineering process for establishing and maintaining consistency of a product’s performance, functional, and physical attributes with its requirements, design, and operational information throughout its life.

In simple terms: if you have hundreds of production servers and you have to upgrade the OS version on all of them, or install new software on all systems, or make any other configuration change, this is the kind of work a configuration management tool (or, traditionally, your system admin) helps you with.


Puppet Intro

Puppet is a pioneering configuration automation and deployment orchestration solution for distributed apps and infrastructure.

This open source configuration management solution is built with Ruby and offers custom Domain Specific Language (DSL) and Embedded Ruby (ERB) templates to create custom Puppet language files and offers a declarative paradigm programming approach. Puppet uses an agent/master architecture—Agents manage nodes and request relevant info from masters that control configuration info.

The Puppet Enterprise product offers the following capabilities:

  • Orchestration
  • Automated provisioning
  • Configuration Automation
  • Visualization and reporting
  • Code management
  • Node management
  • Role-based access control

Pros:

  • Strong compliance automation and reporting tools.
  • Active community support around development tools and cookbooks.
  • Intuitive web UI to take care of many tasks, including reporting and real-time node management.
  • Robust, native capability to work with shell-level constructs.
  • Initial setup is smooth and supports a variety of OSs.
  • Particularly useful, stable and mature solution for large enterprises with adequate DevOps skill resources to manage a heterogeneous infrastructure.

Cons:

  • Can be difficult for new users who must learn Puppet DSL or Ruby, as advanced tasks usually require input from CLI.
  • Installation process lacks adequate error reporting capabilities.
  • Not the best solution available to scale deployments. The DSL code can grow large and complicated at a higher scale.
  • Using multiple masters complicates the management process. Remote execution can become challenging.
  • Support is more focused toward Puppet DSL over pure Ruby versions.
  • Lacks push system, so no immediate action on changes. The pulling process follows a specified schedule for tasks.

How Puppet Works

Video: https://www.youtube.com/watch?v=lxJQX2ipliY

Puppet works on a master-slave relationship. There is a master which handles all the changes and logging, and every client machine runs a Puppet agent; these agents are the slaves.

In general, there are two types of master-slave relationship: pull-based and push-based architecture.

In a push configuration, the centralized server pushes the changes or any action to all the nodes, whereas in a pull-based configuration each node asks for new changes and then fetches them from the centralized server.

Puppet is written in Ruby. It is also available in an enterprise version. From version 2.0 it is available under the Apache License.

The master server contains the manifest file, where, using the declarative Puppet language/Ruby DSL, we write the tasks we have to perform.

Puppet has resources [services, packages, etc.], and we define these resources in the manifest. A group of resources is called a class, used to logically combine resources.

We often have multiple modules to logically group the manifest files.

Puppet can have multiple masters to handle failure conditions. All the agents have to sign certificates, and all exchange of information happens over SSL, so every exchange is protected and authenticated.

The video linked above, from Edureka, explains and demonstrates how it works.

Alternatives of Puppet

  • Chef – pull-based master-slave architecture
  • Ansible – push-based architecture (agentless)
  • Salt Stack – push-based master-slave architecture

MicroService – Brief Introduction

This post will focus on what microservices are, why they are so famous these days, what the positive and negative aspects of these services are, and which areas we will try to cover in future posts.

In addition, I will share a few YouTube links, which are quite helpful for understanding this concept. As a developer, I will say it is just one measure of how well we are packaging and modularizing code.

First of all, I would like to tell you that I am inclined toward microservices, so most of the things you find here will be in favor of them. But I will also discuss all the challenges you might face when you try to follow the awesome journey of microservices.

What is a Microservice?

Rather than going for a definition, we will try to find the common characteristics of microservices.

In simple terms, you can think of a microservice as a very small independent project capable of performing all of its tasks. So if a monolith has various responsibilities, let each of those responsibilities be a separate service. But a responsibility does not have any concrete boundary, which gives rise to a new question: how big or how small should a microservice be? Some people say it should be small enough to be handled by one developer; some say it should not be more than a few hundred lines of code. To settle this, we will call a service a microservice if it has a few of the properties/characteristics mentioned below.

Characteristics of MicroServices

  • Can be upgraded or rewritten independently.
  • Has fault tolerance and a monitoring mechanism.
  • Each service is a complete product.
  • Should have its own data management.
  • Should be easily replaceable.
  • Should only expose endpoints to dependent services.

 

What is all this fuss about, and why is it becoming so popular these days?

As we all know, it is simpler to solve a number of small problems and then join the solutions to solve the bigger problem – what we call divide and conquer. The only catch in this approach is that we need a very good merging technique to get a successful solution. With the DevOps tooling of today (Docker, Kubernetes, Mesos), it has become possible for developers to manage a large number of services and their deployment. It has even helped increase resource utilization, which has further decreased the cost of maintaining multiple services.

Pros

  • It breaks the problem into smaller problems, helping developers solve each one more accurately and in the most optimized manner.
  • Partial deployment and partial upgrade of the application are possible.
  • Helps to reduce the development time.
  • We can easily rewrite any service.
  • Services can be written in any programming language, and we can easily try a new programming language.
  • Easier to find and fix bottlenecks in the system.
  • Maintenance and bug fixes can be simpler.
  • Testing improves, as we can test smaller units independently.
  • CI/CD can be implemented easily.
  • Preserves modularity.

Cons

  • It increases the DevOps activities.
  • The organization has to monitor and handle a large number of services.
  • Service discovery and tracking of requests can be a tedious task.
  • It needs a change in organizational culture.
  • Developers have to work more closely with DevOps in order to make development more streamlined.
  • It needs advanced DevOps.
  • Since it is still fairly new, not everyone has a clear idea of what the boundary of a microservice is.


 

Topics to be covered in upcoming blogs

  • Modules vs microservices
  • How to share domain objects between different microservices
  • Should we have a common DAO [database] layer for every service, or should each service have its own DAO layer?
  • All microservices in a single Git repo, or separate Git repos?
  • How to monitor microservices.
  • How to track the flow of any particular request in real time between services.
  • Containerization of microservices.
  • Creating a containerized service environment which is self-deployed.
  • When to go for microservices.
  • How to shift from a monolith to microservices.
  • How to perform integration testing involving a couple of microservices as dependencies.
  • Distributed logging for services.