Some Use Cases of Annotations

In our previous post, we understood what an annotation is and how to create our own custom annotations. Now we will try to understand how and when we can use annotations.

This is more of a suggestion than a rule; all of it comes from my experience, and it can differ from person to person.

The common examples where I love to use annotations are:

Separation and Categorization
Every one of us loves to code, and to protect our code we write test cases. We are all familiar with unit test cases and integration test cases. To reduce build time we want to run these test cases in parallel. But to run them in parallel we first have to categorize them into two different buckets: we need some mechanism to identify which test cases are unit tests and which are integration tests.
For this, we will try a very simple use case: we will use an annotation to identify the type of each test case. We will create a simple Spring application with Gradle, which has a few unit test cases and integration test cases.
You can access the code base directly from the Git repo.
  • First, create a simple Spring Boot application with Gradle.
  • Create two marker interfaces, “UnitTest” and “IntegrationTest”.

 

package com.application;

public interface UnitTest {}

 

package com.application;

public interface IntegrationTest {}

  • Create Gradle tasks to run the different types of test cases

Gradle tasks

test {
    useJUnit {
        includeCategories 'com.application.UnitTest'
    }
}

task integrationTest(type: Test) {
    useJUnit {
        includeCategories 'com.application.IntegrationTest'
    }
}

 

All you need to do now is put the annotation on your test class.


import org.junit.experimental.categories.Category;

@Category(IntegrationTest.class)
public class IntegrationTestCaseExample {
}

 


import org.junit.experimental.categories.Category;

@Category(value = UnitTest.class)
public class UnitTestCaseExample {
}

Now all you need to do is run gradle test to run the unit test cases and gradle integrationTest to run the integration test cases.

 

Sample code for the same can be seen at this link.

For further usage of annotations, I will try to write a few more blogs.


Java Annotations

Java is one of my favorite languages. It is simple, but some of its features are still not commonly used. One of them is annotations. Annotations are a simple yet powerful concept in Java. If you have worked with a framework like Spring, you may have seen that annotations do most of the configuration.

Though we use annotations in many ways, I have rarely seen developers go for the creation of their own custom annotations. This blog helps us understand when and where we can utilize the power of annotations.

What is Annotation?

import javax.annotation.PostConstruct;
import javax.validation.constraints.Max;

@Deprecated
public class Foo {

    @Max(10)
    int foo = 0;

    @PostConstruct
    private void setFoo() {
        foo = 4;
    }

    @Deprecated
    int returnFooCount() {
        return foo;
    }
}

An annotation is a tag carrying metadata that can be attached to a class, interface, method or field to indicate additional information which can be used by the Java compiler and the JVM.

Example: @Deprecated, @Override and @PostConstruct are some of the built-in Java annotations we have been using. Here you can see we have added annotations at the class level, at the attribute level and even at the method level.

Why Annotations?

Prior to annotations, XML was often used to hold this metadata. A large set of architects and developers believe that XML maintenance is difficult and that it is not very easy to read. Something was needed that is closer to the code. Annotations are easy to understand and more readable.

Uses Of Annotations

  • Information for the compiler — Annotations can be used by the compiler to detect errors, suppress warnings, or give some information to the compiler (a short example follows this list). [e.g. @Override, @SuppressWarnings]
  • Compile-time and deployment-time processing — Software tools can process annotation information to generate code, XML files, and so forth. [e.g. the Lombok project: @Data, @Builder]
  • Runtime processing — Some annotations are available to be examined at runtime. [e.g. they can be used as pointcuts in Aspect Oriented Programming]
  • Comments — People often use annotations in place of comments; I prefer annotations over comments as they provide a structured way of commenting. [@Deprecated, or any other custom annotation]
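To make the first point concrete, here is a minimal sketch (it reuses the Foo class from the example above, whose returnFooCount method is marked @Deprecated; the LegacyCaller class is purely illustrative): the deprecation annotation carries information for the compiler, and @SuppressWarnings tells the compiler to stay quiet about the deprecated call.

public class LegacyCaller {

    @SuppressWarnings("deprecation")   // information for the compiler: silence the deprecation warning
    int callOldApi(Foo foo) {
        return foo.returnFooCount();   // returnFooCount is marked @Deprecated in the example above
    }
}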

How Do Annotations Work?

Before using any concept we should understand how it works internally; only then can we explore better ways of using it and use it properly. As we all know, an annotation does not contain any business logic. Then how does it actually work?

When Java source code is compiled, annotations can be processed by compiler plug-ins called annotation processors. Processors can produce informational messages or create additional Java source files or resources, which in turn may be compiled and processed; they cannot, however, modify the annotated code itself through the standard API. The Java compiler conditionally stores annotation metadata in the class files if the annotation has a RetentionPolicy of CLASS or RUNTIME (we will cover more about retention policies in a further section). Later, the JVM or other programs can look for the metadata to determine how to interact with the program elements or change their behavior.

In addition to processing an annotation using an annotation processor, a Java programmer can write their own code that uses reflection to process the annotation.

The java.lang.reflect package contains an interface called AnnotatedElement that is implemented by the Java reflection classes, including Class, Constructor, Field, Method, and Package. The implementations of this interface represent an annotated element of the program currently running in the Java Virtual Machine. This interface allows annotations to be read reflectively.

The AnnotatedElement interface provides access to annotations having RUNTIME retention. This access is provided by the getAnnotation, getAnnotations, and isAnnotationPresent methods. Because annotation types are compiled and stored in byte code files just like classes, the annotations returned by these methods can be queried just like any regular Java object.
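As a minimal sketch of this reflective access, the snippet below reads the @Deprecated annotations from the Foo class of the earlier example (@Deprecated has RUNTIME retention, so it is visible through the AnnotatedElement methods mentioned above):

import java.lang.reflect.Method;

public class AnnotationReader {

    public static void main(String[] args) throws Exception {
        // Class-level check through the AnnotatedElement API
        System.out.println(Foo.class.isAnnotationPresent(Deprecated.class));   // true

        // Method-level lookup with getAnnotation
        Method method = Foo.class.getDeclaredMethod("returnFooCount");
        Deprecated deprecated = method.getAnnotation(Deprecated.class);
        System.out.println(deprecated != null);                                // true
    }
}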

How to Build a Custom Annotation?

Below is the simplest Annotation one can create.


public @interface CustomAnnotation {
}

 

An annotation can itself be annotated with meta-annotations, which tell Java something about our new annotation.

  • Retention – It tells Java how long an annotation will be retained during the different phases of the code lifecycle. The @Retention annotation takes a RetentionPolicy, which has 3 possible values:
    • SOURCE: Annotations are to be discarded by the compiler.
    • CLASS: Annotations are to be recorded in the class file by the compiler but need not be retained by the VM at run time. [DEFAULT]
    • RUNTIME: Annotations are to be recorded in the class file by the compiler and retained by the VM at run time, so they may be read reflectively.

@Retention(RetentionPolicy.RUNTIME)
public @interface CustomAnnotation {
}

Note: A @Retention meta-annotation has an effect only if the meta-annotated type is used directly for annotation. It has no effect if the meta-annotated type is used as a member type in another annotation type.

  • Target: As we know, we can put annotations at many different levels: on a class, on a field and many others. But as a developer, you might want to restrict the use of your annotation to certain levels only, as it may only make sense at a particular level. @Target declares the levels at which the annotation can be used: it has an attribute that takes an array of element types listing all the permitted levels. There are different element types in Java; most of them are self-explanatory.
    • Element Type
      • TYPE
      • FIELD
      • METHOD
      • PARAMETER
      • CONSTRUCTOR
      • LOCAL_VARIABLE
      • ANNOTATION_TYPE
      • PACKAGE
      • TYPE_PARAMETER [new in java 1.8]
        • eg  <T extends @A Object & @B Comparable, U extends @C Cloneable>
      • TYPE_USE [new in java 1.8] eg –
        • Map<@NonNull String, @NonEmpty List<@Readonly Document>> files;
  • Inherited: If this meta-annotation is present, the annotation will be inherited by subclasses; by default it is not. It is only effective when the annotation is applied at class level.
  • Documented: Marks the annotation for inclusion in the documentation. It indicates that annotations of this type are to be documented by javadoc and similar tools by default.
  • Repeatable: Prior to Java 8, users were not allowed to apply the same annotation multiple times in the same place. But if an annotation is marked @Repeatable, it can be applied multiple times in the same place.

Example

 

@Retention(RetentionPolicy.RUNTIME)
public @interface Cars {
    Manufacturer[] value() default {};
}

@Retention(RetentionPolicy.RUNTIME)
@Repeatable(value = Cars.class)
public @interface Manufacturer {
    String value();
}

@Manufacturer("Mercedes Benz")
@Manufacturer("Toyota")
@Manufacturer("BMW")
@Manufacturer("Range Rover")
public interface Car {

}

Now we can apply the Manufacturer annotation multiple times to the interface Car. The Manufacturer annotation carries the @Repeatable meta-annotation, whose value is the containing annotation type (here, Cars).
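Because both annotation types above are retained at run time, the repeated values can be read back reflectively with getAnnotationsByType; a minimal sketch:

import java.util.Arrays;

public class ManufacturerReader {

    public static void main(String[] args) {
        // getAnnotationsByType looks through the Cars container automatically
        Manufacturer[] manufacturers = Car.class.getAnnotationsByType(Manufacturer.class);
        Arrays.stream(manufacturers)
              .map(Manufacturer::value)
              .forEach(System.out::println);   // Mercedes Benz, Toyota, BMW, Range Rover
    }
}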

 

 

Now that we know all the attributes of an annotation, we can create our own annotations.


@Retention(RetentionPolicy.RUNTIME)
@Documented
@Target(value = {ElementType.TYPE})
public @interface CustomAnnotation {
    String name();
    String identifier() default "";
}

 

In our custom annotation, we have two properties, name and identifier. name doesn’t have a default value, so whenever we use CustomAnnotation it is mandatory to give the name attribute a value, otherwise it gives a compile error. On the other hand, identifier has a default value, so we don’t have to give it a value.
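For example, a minimal usage sketch (the class names here are purely illustrative):

@CustomAnnotation(name = "orderService")                         // name is mandatory
public class OrderService {
}

@CustomAnnotation(name = "paymentService", identifier = "v2")    // identifier is optional
class PaymentService {
}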

 

So, till now we have a basic idea of how annotations work and how to create an annotation. In a further blog, we will see the different situations where we can create our own custom annotations, and how to tell the JVM to process these annotations.

Some Useful In-Built Annotations in Java

 

  • @GuardedBy
  • @Immutable
  • @NotThreadSafe
  • @ThreadSafe
  • @SuppressWarnings


Difference between Dependency Management and Dependencies in Maven

Maven is a software project management tool, used to manage information, dependencies and other things for a project.

It has two mechanisms for adding the dependencies of other modules/projects: one is the dependencies tag and the other is the dependencyManagement tag. People often wonder what the difference between the two is. An important question is when to use which.

First of all, we should have an idea of what a multi-module application is, as it is only in the case of multi-module applications that the two differ.

A multi-module project is, as its name suggests, a project that consists of multiple modules, where a module is a project. You can think of a multi-module project as a logical grouping of sub-projects. The packaging of a multi-module project is “pom” since it consists of a pom.xml file but no artifact (jar, war, ear, etc).

[Figure: multi-module web Spring project layout]

That’s the technical story. But it is not great if we are trying to teach someone; concepts should be as simple as a story for a 12-year-old child. So let’s understand dependencies and dependency management.

There is a man called Peter; he has two ice cream parlors, Gelatos and Baskin Robbins. Peter has two children, Ron and Seria.

Peter has 2 ice creams in the Gelatos parlor – mango [basic version] and strawberry [moderate version] – and 1 ice cream in Baskin Robbins – blackcurrant [high version].

Both Ron and Seria can have all the ice creams which their father has in the Gelatos parlor, but they can have ice cream from Baskin Robbins only if they ask for it. This means they can only have blackcurrant if they specifically ask for it, but they don’t have to mention which version of blackcurrant they need, as their father Peter already knows which kind of blackcurrant ice cream they have.

Here Peter is the parent module, and Ron and Seria are child modules. Gelatos is the dependencies tag and Baskin Robbins is the dependencyManagement tag.


So all the dependencies present in the dependencies tag will be available to all child modules. But the dependencies present in the dependencyManagement tag of the parent module will be available to a child only if they are also declared in the dependencies tag of that child module.

So why do we even use dependencyManagement if the dependencies tag passes all the dependencies to the children?

  1. As not all children might need all the dependencies present in the parent module, it is wise to use dependencyManagement.
  2. Using dependencyManagement we can keep the version of every artifact consistent throughout the application. [It helps maintain the version of artifacts]
  3. dependencyManagement manages the version, scope, and exclusions of artifacts in child modules.

Example –

Parent POM- A

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xmlns="http://maven.apache.org/POM/4.0.0"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.test</groupId>
    <artifactId>A</artifactId>
    <packaging>pom</packaging>
    <version>1.0-SNAPSHOT</version>
    <modules>
        <module>B</module>
        <module>C</module>
    </modules>

    <dependencies>
            <dependency>
            <groupId>com.external</groupId>
            <artifactId>d1</artifactId>
            <version>1</version>
        </dependency>
    </dependencies>
    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>com.external</groupId>
                <artifactId>d2</artifactId>
                <version>1</version>
            </dependency>
            <dependency>
                <groupId>com.external</groupId>
                <artifactId>d3</artifactId>
                <version>1</version>
            </dependency>
        </dependencies>

    </dependencyManagement>
</project>

It has 2 child modules, B and C. Parent A has 3 dependencies in total:

  • d1 [Inside dependencies tag]
  • d2 [Inside dependency management tag]
  • d3 [Inside dependency management tag]

 

 Child Pom B

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <groupId>com.test</groupId>
        <artifactId>A</artifactId>
        <version>1.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.test</groupId>
    <artifactId>B</artifactId>
    <packaging>pom</packaging>

    <dependencies>
        <dependency>
            <groupId>com.external</groupId>
            <artifactId>d2</artifactId>
        </dependency>
        <dependency>
            <groupId>com.external</groupId>
            <artifactId>d4</artifactId>
            <version>1</version>
        </dependency>
    </dependencies>
</project>

It will have access to 3 artifacts

  • d1 [coming from the dependencies tag of the parent]
  • d2 [coming from the parent's dependencyManagement, as it is mentioned in B's dependencies tag]
  • d4 [coming from its own dependencies tag]

Note: It will not have the d3 artifact, as d3 is mentioned in the dependencyManagement tag of the parent POM [POM A] but not declared in the child POM [POM B].

Child Pom C

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <groupId>com.test</groupId>
        <artifactId>A</artifactId>
        <version>1.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.test</groupId>
    <artifactId>C</artifactId>
    <packaging>pom</packaging>

    <dependencies>
        <dependency>
            <groupId>com.external</groupId>
            <artifactId>d3</artifactId>
        </dependency>
        <dependency>
            <groupId>com.external</groupId>
            <artifactId>d5</artifactId>
            <version>1</version>
        </dependency>
    </dependencies>
</project>

Similarly, it will have access to 3 artifacts:

  • d1 [coming from the dependencies tag of the parent]
  • d3 [coming from the parent's dependencyManagement, as it is mentioned in C's dependencies tag]
  • d5 [coming from its own dependencies tag]

Note: It will not have the d2 artifact, as d2 is mentioned in the dependencyManagement tag of the parent POM [POM A] but not declared in the child POM [POM C].


 

Further Reading – http://maven.apache.org/guides/introduction/introduction-to-dependency-mechanism.html

 

Add Google Authentication using Firebase in React+Redux Application

Single page applications (SPAs) are quite famous these days; they are easy to build thanks to all the available libraries and frameworks. Angular (by Google) and React (by Facebook) are the most famous options to explore. Because of these, front-end applications are now easy to manage and maintain. But even if we can create an SPA using all these technologies, we still need some server-side logic to persist our data, and most importantly we need authentication so that each user can perform actions within the scope they are authorized for.

Below is the diagram which shows the authentication flow.

[Figure: OAuth implicit authentication flow]

Quick Intro of Firebase

Formerly known as Google Cloud Messaging (GCM), Firebase Cloud Messaging (FCM) is a cross-platform solution for messages and notifications for Android, iOS, and web applications, which currently can be used at no cost.

Today we will try to build a simple SPA using React, Redux, and Firebase (which will provide the Google authentication).

Prerequisites: installed software (Node.js and npm).

We will follow the https://github.com/jainamit333/react_google_authetication to walk through the development.

Branch Name:- vanilla

Create a directory named google-authentication-react

 mkdir google-authentication-react
 cd google-authentication-react

 

Create Directory Structure as follows

[Figure: project directory structure]

Add the following code to package.json

{
"name": "google-authentication-react",
"version": "0.1.0",
"private": true,
"scripts": {
"start": "npm run build; node server/index.js",
"start-dev": "nodemon server/index.js",
"build": "webpack -p",
"build-dev": "webpack -w",
"build-sass": "node-sass -w ./client/styles/main.scss -o ./client/styles/mainSheet",
"test": "echo \"Error: no test specified\" &amp;amp;amp;amp;amp;amp;&amp;amp;amp;amp;amp;amp; exit 1",
"stats": "webpack --env production --profile --json &amp;amp;amp;amp;amp;gt; stats.json"
},
"dependencies": {
"axios": "^0.16.1",
"babel": "^6.5.2",
"babel-core": "^6.18.0",
"babel-loader": "^6.2.7",
"babel-preset-es2015": "^6.18.0",
"babel-preset-react": "^6.16.0",
"babel-preset-stage-2": "^6.24.1",
"body-parser": "^1.17.1",
"bootstrap": "^4.0.0-alpha.6",
"css-loader": "^0.28.0",
"express": "^4.15.2",
"firebase": "^4.2.0",
"muicss": "^0.9.20",
"node-sass": "^4.5.2",
"react": "^15.6.1",
"react-addons-css-transition-group": "^15.6.0",
"react-addons-transition-group": "^15.6.0",
"react-dom": "^15.6.1",
"react-redux": "^5.0.4",
"react-router": "^3.0.0",
"react-router-dom": "^4.1.2",
"reactstrap": "^4.8.0",
"redux": "^3.6.0",
"redux-logger": "^3.0.1",
"redux-thunk": "^2.2.0",
"sass-loader": "^6.0.3",
"style-loader": "^0.16.1",
"volleyball": "^1.4.1",
"webpack": "^2.7.0",
"webpack-livereload-plugin": "^0.10.0"
},
"devDependencies": {
"chai": "^3.5.0",
"cross-env": "^3.1.4",
"expose-loader": "^0.7.3",
"mocha": "^3.1.2",
"nodemon": "^1.11.0",
"react-hot-loader": "^1.3.1",
"supertest": "^2.0.1",
"supertest-as-promised": "^4.0.1",
"webpack-dashboard": "^0.4.0",
"webpack-dev-server": "^2.6.1"
}
}

Note the firebase entry we have added in the dependencies section.

Note: This package.json may contain many extra dependencies, as I have extracted it from another project of mine just for tutorial purposes.

Install all the added dependencies

npm install

Add the webpack config code


const path = require('path');
const LiveReloadPlugin = require('webpack-livereload-plugin');
const webpack = require('webpack');

module.exports = {
entry: './client/index.js',
output: {
filename: 'bundle.js',
path: path.resolve(__dirname, 'client/dist')
},
context: __dirname,
resolve: {
extensions: ['.js', '.jsx', '.json', '*']
},
devtool:'cheap-module-source-map',
devServer: {
inline: true,
contentBase: './dist',
port: 3001
},
module: {
rules: [{
test: /\.jsx?$/,
exclude: /(node_modules|bower_components)/,
loader: 'babel-loader',
options: {
presets: ['react', 'es2015','stage-2']
}
},
{
test: /\.scss$/,
use: [
'style-loader',
'css-loader',
'sass-loader'
]
}]
},
plugins: [
new webpack.DefinePlugin({

'process.env.COSMIC_BUCKET': JSON.stringify(process.env.COSMIC_BUCKET),
'process.env.COSMIC_READ_KEY': JSON.stringify(process.env.COSMIC_READ_KEY),
'process.env.COSMIC_WRITE_KEY': JSON.stringify(process.env.COSMIC_WRITE_KEY)
}),
new LiveReloadPlugin({appendScriptTag: true})

]
};

 

React and Redux related files

  • server/index.js
const express = require('express');
const app = express();
const path = require('path');
const volleyball = require('volleyball');
app.use(volleyball);
//serve up static files
app.use(express.static(path.resolve(__dirname, '..', 'client')));
app.use(express.static(path.resolve(__dirname, '..', 'node_modules')));
app.use(function (err, req, res, next) {
    console.error(err);
    console.error(err.stack);
    res.status(err.status || 500).send(err.message || 'Internal server error.');
});

// handle every other route with index.html, which will contain
// a script tag to our application's JavaScript file(s).
app.get('*', function (request, response) {
    response.sendFile(path.resolve(__dirname, '..', 'client', 'index.html'))
});
//listen on port 3000
app.listen(process.env.PORT || 3000, function () {
    console.log("Rockin' out on port 3000 homie");
});

We are starting the project on port 3000.

  • redux/actions/actions.js
var constants = {
START_AUTHENTICATING:'START_AUTHENTICATING',
AUTHENTICATION_SUCCESSFUL:'AUTHENTICATION_SUCCESSFUL',
AUTHENTICATION_FAILED:'AUTHENTICATION_FAILED',
ALREADY_LOGIN:'ALREADY_LOGIN',
LOGOUT:'LOGOUT',
LOGOUT_SUCCESSFUL:'LOGOUT_SUCCESSFUL',
LOGOUT_ERROR:'LOGOUT_ERROR',
}

export default constants;

We have created constants for the actions which will be supported by our dummy application.

  • redux/actions/auth.js
import constant from './actions'

export const startAuth = keyWord => {

return {
type: constant.START_AUTHENTICATING,
authenticated:false,
authenticating:true
}
}

export const alreadyLogin = response => {

return{
type:constant.ALREADY_LOGIN,
authenticated:true,
authenticating:false,
user:response

}
}

export const authError = error => {

return {
type: constant.AUTHENTICATION_FAILED,
authenticated:false,
authenticating:false,
error
}
}

export const authSuccess = response => {
return {
type: constant.AUTHENTICATION_SUCCESSFUL,
user:response,
authenticated:true,
authenticating:false
}
}

These are the various actions that will be dispatched during our login lifecycle.
We usually create a separate actions file for each piece of the flow.

  • redux/reducers/auth.js
import constants from '../actions/actions'

const auth = (state = [], action) => {
switch (action.type) {

case constants.START_AUTHENTICATING:
return Object.assign({}, state, {
authenticated: false,
authenticating: true,
user: action.user
}, ...state)
break;
case constants.AUTHENTICATION_FAILED:
return Object.assign({}, state, {
authenticated: false,
authenticating: false,
user: {}
}, ...state)
break;
case constants.AUTHENTICATION_SUCCESSFUL:
return Object.assign({}, state, {
authenticated: true,
authenticating: false,
user: action.user
}, ...state)
break;

case constants.ALREADY_LOGIN:
return Object.assign({}, state, {
authenticated: true,
authenticating: false,
user: action.user
}, ...state)
break;
default:
return state
}

}
export default auth

As you can see, on success we set the user and set the authenticated and authenticating params accordingly in every case.
The other two params are mostly useful if we want to show a loading bar during the login process.

 

  • redux/reducer.js
import {combineReducers} from "redux";
import auth from "./reducers/auth";

const googleBooks = combineReducers({
auth
})

export default googleBooks

This is the place where we combine all our reducers.
For now, we have only one reducer, auth, which we pass to combineReducers.

redux/store.js

import { createStore, applyMiddleware } from 'redux';
import reducer from './reducer';
import thunk from 'redux-thunk';
import {createLogger} from 'redux-logger';

const initialState = {
auth:{
authenticated:false,
authenticating:false,
user:{},
}
}

const store = createStore(
reducer,
initialState,
applyMiddleware(
createLogger(),
thunk
)

);
export default store;

We are creating a Redux store and passing it our combined reducer, the initial state and the middleware.

  • client/index.html
<!DOCTYPE html>
<html>
<head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <title>Books Around You</title>
    			<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
    <script src="https://code.jquery.com/jquery-3.2.1.min.js"
            integrity="sha256-hwg4gsxgFZhOsEEamdOYGBf13FyQuiTwlAQgxVSNgt4="
            crossorigin="anonymous"></script>

    <script src="/dist/bundle.js" defer></script>
</head>
<body>


<div id="root"></div>


</body>
</html>

Creating a placeholder for all our React components.

  • client/index.js
import React from 'react';
import ReactDOM from 'react-dom';
import {Provider} from 'react-redux';
import store from '../redux/store';
import Dashboard from "./components/Dashboard";

ReactDOM.render(
    <Provider store={store}>
        <Dashboard />
    </Provider>,
    document.getElementById('root')
);

We are adding the Dashboard component as our main component, which will be rendered into the root div.

  • client/component/Dashboard.js
import React, {Component} from "react";
import {connect} from "react-redux";
import Navigation from "./Navigation";
import {login} from "../../services/firebase/auth";
import UserInfoPanel from "./UserInfoPanel";

class Dashboard extends Component {

    componentDidMount() {
        login()
    }

    render() {
        return (


<div>
                <Navigation />


<div className="mui-row">
</div>



<div className="mui-row">
                    <UserInfoPanel/>
</div>


</div>


        )
    }
}

function mapStateToProp(state) {
    return state;
}

export default connect(mapStateToProp)(Dashboard)

We are calling the login method in componentDidMount,
so it will check whether we are already logged in before mounting this component.

  • client/component/Navigation.js
import React from "react";
import {connect} from "react-redux";
import {login, logout} from "../../services/firebase/auth";

class Navigation extends React.Component {

    render() {

        var styles = {

            marginTop:{
                marginTop:'10px'
            },
            baseColor:{
                color:'#a83808'
            }

        }
        return (
            <nav className="navbar navbar-default">


<div className="container-fluid">


<div className="navbar-header">
                        <a className="navbar-brand" href="#" >
                            <span  className="glyphicon glyphicon-bishop " aria-hidden="true" style={styles.baseColor}></span>
                        </a>
</div>



<ul className="nav navbar-nav navbar-right ">


	<li>
                            {this.props.authenticated && <span style={styles.marginTop} onClick={logout} className="btn btn-sm btn-danger">Logout</span>}
                            {!this.props.authenticated && <span style={styles.marginTop} onClick={login(this.props.dispatch)} className="btn btn-sm btn-danger">Login</span>}
</li>


</ul>


</div>


            </nav>
        );
    }
}

function mapStateToProps(state) {
    return {
        authenticated: state.auth.authenticated,
    }
}

export default connect(mapStateToProps)(Navigation)

If the user is logged in, the LOGOUT button will render; otherwise, the LOGIN button will.

  • client/component/UserInfoPanel.js
import React from 'react';
import {connect} from "react-redux";

class UserInfoPanel extends React.Component {

    render() {
        const styles = {
            card:{
                width: '20em',
                position: 'relative',
                display: 'flex',
                flexDirection:'column',
                backgroundColor: '#fff',
                border: '1px solid rgba(0,0,0,.125)',
                borderRadius: '.25rem',
                padding:'3px',
                margin:'2px'
            }
        }
        return (


<div className="card" style={styles.card}>
                { this.props.authenticated && <img className="card-img-top img-thumbnail" src = {this.props.user.photoURL} alt="Card image cap"/> }
                { this.props.authenticated &&


<div className="card-block">


<h4 className="card-title">{this.props.user.displayName}</h4>


{this.props.user.email}

</div>


                }
</div>


        );
    }
}

function mapStateToProps(state) {
    return {

        authenticated: state.auth.authenticated,
        authenticating: state.auth.authenticating,
        user: state.auth.user
    }
}

export default connect(mapStateToProps)(UserInfoPanel)

This component shows the information of the logged-in user.
In mapStateToProps, you can see we are mapping state params to the props of the component.
This is a connected component.

Firebase related Files

  • services/firebase/config.js
import firebase from 'firebase'

const config = {

apiKey: "<your api key from google developer console.>",
authDomain: "<auth domain from firebase project eg >",
databaseURL: "<databse url from firebase>",
storageBucket: "<storage bucket from firebase>",

}

firebase.initializeApp(config);
export const provider = new firebase.auth.GoogleAuthProvider();
provider.addScope('https://www.googleapis.com/auth/plus.login')
export const firebaseAuth = firebase.auth

Replace the config part with your own configs.
After defining the config we initialize Firebase and then create a Google authentication provider.
The addScope call states the scope of the Google API we are using; for now we only need plus.login,
as we are using it only for authentication.
When you need some other access, add the other scopes to the same provider.

  • services/firebase/auth.js
import { firebaseAuth, provider} from './config'
import {alreadyLogin, authError, authSuccess, startAuth} from "../../redux/actions/auth";
import store from '../../redux/store'

export function logout () {

return firebaseAuth().signOut()
}

function doLogin() {

firebaseAuth().signInWithPopup(provider).then(function(result) {
store.dispatch(authSuccess(result.user))

}).catch(function(error) {
store.dispatch(authError(error))
});
}

export function login () {

firebaseAuth().onAuthStateChanged((response) => {
if(response){
store.dispatch(alreadyLogin(response))
}else{
store.dispatch(startAuth())
doLogin()
}
});
return
}

In the login method we first check whether the user is already logged in.
If not, it calls the doLogin method; otherwise it dispatches the alreadyLogin action.

In the doLogin method we log in using the Google provider.
If it is successful we dispatch the authSuccess action, otherwise
the authError action.

NOTE: Since we are using Google authentication from Firebase, it has to be enabled in Firebase. Inside your Firebase project, go to Authentication, then Sign-in method, and enable the Google provider.

Different dropout in Tensorflow

Dropout is a regularization technique for reducing overfitting in neural networks by preventing complex co-adaptations on training data. It is a very efficient way of performing model averaging with neural networks.

[Figure: dropout applied to a neural network]

Why Dropout: Dropout helps prevent weights from converging to identical positions. It does this by randomly turning nodes off when forward propagating.

In simple terms some of your neurons will not participate in the calculation.

In Tensorflow we have two dropout functions.

  • tf.nn.dropout
    • It has a parameter keep_prob, which states the probability that a neuron is kept (not dropped). If we give keep_prob a value of 0.6, it means 60% of the neurons will remain and 40% will be dropped.
  • tf.layers.dropout
    • It has two main parameters, rate and training. rate is the fraction of neurons that will be dropped. If the rate is 0.6, then 60% of the neurons will be dropped and 40% will be used.
    • We can see that keep_prob = 1 - rate.
    • The training parameter differentiates whether the network is running for training or for inference. We need dropout only when we are training the neural network, not while we are testing (inferencing) it.

So what is the difference between these two functions?

tf.layers.dropout is a wrapper over tf.nn.dropout, which gives us the demarcation of whether to return the output in training mode (apply dropout) or in inference mode (return the input untouched).

 

What Is YAGNI?

Developers often debate a lot about what YAGNI actually is. Writing any new piece of code can tie a developer’s hands in the name of YAGNI.

I have worked in both product-based and service-based companies. I personally feel that YAGNI is more appreciated in service-based companies, as in a product-based company any extra feature is helpful.

People back YAGNI by saying “You don’t wipe before you shit.” So true. We should not increase the scope of the problem, as it may open the gates to new bugs. So YAGNI tells us: “don’t write any extra/new code unless it is actually required”.

I agree with the last quoted statement. But should we tie our hands rather than implementing the requirement in a way that stays open for extension, or writing functionality in a more generic way so that it can be reused in the near future?

So, my definition of YAGNI is: always write code within the scope of the current requirement, but don’t cry YAGNI if someone wants to solve it in a generic or extensible way. The code should be closed for modification but open for extension.

Try to solve problems in generic ways that can be reused in the future.

Difference Between Generative and Discriminative machine learning

To understand these two models we first have to see the difference between joint probability [P(x,y)] and conditional probability [P(x|y)].

Joint probability:  p(A and B).  The probability of event A and event B occurring.  It is the probability of the intersection of two or more events.  The probability of the intersection of A and B may be written p(A ∩ B). Example:  the probability that a card is a four and red =p(four and red) = 2/52=1/26.  (There are two red fours in a deck of 52, the 4 of hearts and the 4 of diamonds).

Conditional probability:  p(A|B) is the probability of event A occurring, given that event B occurs. Example:  given that you drew a red card, what’s the probability that it’s a four (p(four|red))? 2/26 = 1/13.  Out of the 26 red cards (given a red card), there are two fours, so 2/26 = 1/13.
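Spelled out, the two are linked by the definition of conditional probability; plugging in the card numbers from above:

\[
p(A \mid B) = \frac{p(A \cap B)}{p(B)}, \qquad
p(\text{four} \mid \text{red}) = \frac{2/52}{26/52} = \frac{2}{26} = \frac{1}{13}
\]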

For better understanding, click here for more on probability.

A generative algorithm models how the data was generated in order to categorize a signal. It asks the question: based on my generation assumptions, which category is most likely to have generated this signal? Let’s say you have input data x and you want to classify the data into labels y. A generative model learns the joint probability distribution p(x,y). Since it models how the data was “generated”, you ask it “what’s the likelihood that this or that class generated this instance?” and pick the one with the higher probability.

A discriminative algorithm does not care about how the data was generated; it simply categorizes a given signal. A discriminative model learns the conditional probability distribution p(y|x), which you should read as the probability of y given x. A discriminative algorithm uses the data to create a decision boundary, so you ask it “which side of the decision boundary is this instance on?”

The fundamental difference between discriminative models and generative models is:

  • Discriminative models learn the (hard or soft) boundary between classes
  • Generative models model the distribution of individual classes

Given input data point x, the aim is to predict continuous (regression) or discrete (classification) output. That is given x, we are interested in modeling p(y|x). There are three approaches to this:

1. Generative Models:
One way is to model p(x, y) directly. Once we do that, we can obtain p(y|x) by simply conditioning on x, and we can then use decision theory to determine class membership, i.e. we can use a loss matrix, etc. to determine which class the point belongs to (such an assignment would minimize the expected loss). For example, in the Naive Bayes model, you can learn p(y), the prior class probabilities, from the data. You can also learn p(x|y) from the data using maximum likelihood estimation (or a Bayes estimator if you prefer). Once you have p(y) and p(x|y), p(x, y) is not difficult to find out.
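Written out, the conditioning step this paragraph describes is just Bayes’ rule:

\[
p(x, y) = p(x \mid y)\,p(y), \qquad
p(y \mid x) = \frac{p(x, y)}{p(x)} = \frac{p(x \mid y)\,p(y)}{\sum_{y'} p(x \mid y')\,p(y')}
\]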

2. Discriminative Models:
Instead of modeling p(x, y), we can directly model p(y|x); for example, in logistic regression p(y|x) is assumed to be of the form 1 / (1 + exp(-Σᵢ wᵢxᵢ)). All we have to do in such a case is learn the weights that minimize the loss (for logistic regression, the negative log-likelihood).
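Spelled out, the logistic model and the log-likelihood objective it is usually fit with are (writing p̂ₙ for the predicted probability that yₙ = 1):

\[
p(y = 1 \mid x) = \frac{1}{1 + \exp\!\left(-\sum_i w_i x_i\right)}, \qquad
\min_w \; -\sum_n \left[ y_n \log \hat{p}_n + (1 - y_n) \log\left(1 - \hat{p}_n\right) \right]
\]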

Generative models often outperform discriminative models on smaller datasets because their generative assumptions place some structure on your model that prevent overfitting. For example, let’s consider Naive Bayes vs. Logistic Regression. The Naive Bayes assumption is of course rarely satisfied, so logistic regression will tend to outperform Naive Bayes as your dataset grows (since it can capture dependencies that Naive Bayes can’t). But when you only have a small data set, logistic regression might pick up on spurious patterns that don’t really exist, so the Naive Bayes acts as a kind of regularizer on your model that prevents overfitting. There’s a paper by Andrew Ng and Michael Jordan on discriminative vs. generative classifiers that talks about this more.

Whenever an algorithm involves assuming, calculating or estimating the distribution of Y, it is generative, or simply put, if the algorithm cares about the distribution of Y, it is generative, if not, then it is discriminative.

Now a Small story to tell your 12-year-old kid, so that they can also understand the difference between these two models

Let’s say you have two kids, “Gen” and “Dis”, and since their birth they have never opened their eyes. Today is the first day they will open their eyes, and you want to celebrate this occasion by teaching them the difference between a cat and a dog. You take them to a pet store nearby.

Before showing them around, you tell Gen and Dis to pay special attention to the color, size, eye color, fur length, voice etc. (the feature set) of the pets they are going to see. At the end of this visit, you want to check if they understood the difference between a cat and a dog.

Now you give two photos one of a cat and one of a dog to Dis and ask which one is which. Dis has meticulously written down several conditions like if the voice sounds like meow and eyes are blue or green and has stripes with color brown or black then the animal is a cat. Thanks to her relatively simple rules, she quickly detected which one is a cat and which one is a dog.

Now instead of giving two photos you gave Gen two pieces of blank paper and ask her to draw what a cat and a dog looks like.

Well now, given any photo, Gen can also tell which one is a cat and which one is a dog based on the drawings she created. In most cases, drawing the cat and the dog was unnecessary and time-consuming for the task of detecting which one is a cat.

But suppose there were only a few dogs and cats for Gen and Dis to look at (little training data). In such cases, if you show a photo of a brown dog with stripes and blue eyes, there is a chance that Dis would mark it as a cat, while Gen has her drawing and can better detect that this photo is of a dog.

If you ask Gen to pay attention to more things (features), she will create a better sketch. But if you show more examples (a bigger data set) of cats and dogs, Dis will mostly be better than Gen.

Since Dis is very meticulous in her observations, if you ask her to pay attention to more features she will create more complicated rules (overfitting) and the chance of wrongly identifying a cat or a dog will increase; that does not happen as easily with Gen.

What if, before going to the pet store, you don’t tell them that there are only two types of animals (no labeled data)? Dis would fail completely because she would not know what to look for, while Gen would be able to draw her sketches anyway. This is a huge advantage sometimes (semi-supervised learning).

Now let me reveal the suspense which you might already know: Dis is for discriminative and Gen is for generative.
