Changing apache2 document root in ubuntu 14.x

Having the default Apache document root set to /var/www can sometimes be annoying because of permissions. I strongly recommend using a folder inside your home directory.

For the sake of this post, I will use the folder Sites (that I created under my personal folder /home/shprink/Sites/).

PS: The Apache version used for this post is 2.4.10 (Ubuntu); you can check yours with the apache2 -v command.

Changing apache2 document root

The default document root is set in the 000-default.conf file that is under /etc/apache2/sites-available folder.

$ cd /etc/apache2/sites-available
$ sudo nano 000-default.conf

While the file is open, replace DocumentRoot /var/www/ with your new folder, e.g. DocumentRoot /home/shprink/Sites/

Set the right Apache configuration

The configuration of the /var/www folder is in /etc/apache2/apache2.conf. Edit this file to add the configuration for your new document root.

$ sudo nano /etc/apache2/apache2.conf

Copy the following:

<Directory /var/www/>
       Options Indexes FollowSymLinks
       AllowOverride None
       Require all granted
</Directory>

and change the directory path:

<Directory /home/shprink/Sites/>
       Options Indexes FollowSymLinks
       AllowOverride None
       Require all granted
</Directory>

Restart Apache

$ sudo service apache2 restart

Set the right permission

All of your document root's parent folders must be executable (searchable) by everyone. To check whether that is the case, you can use the namei utility, which lists the permissions of each component of the path:

$ namei -m /home/shprink/Sites/
f: /home/shprink/Sites/
 drwxr-xr-x /
 drwxr-xr-x home
 drwx------ shprink
 drwx------ Sites

Here you can see that the shprink and Sites permissions are not set properly.

Open http://localhost/ in your browser, you should get the following message:

Forbidden
You don’t have permission to access / on this server.

Open the Apache error log to see the exact error code, e.g. AH00035. It might give you more information.

$ sudo tail -f /var/log/apache2/error.log
[Mon Apr 06 09:04:26.518260 2015] [core:error] [pid 22139] (13)Permission denied: [client 127.0.0.1:45121] AH00035: access to / denied (filesystem path '/home/shprink/Sites') because search permissions are missing on a component of the path

To fix the permission problem for good, chmod 755 should be enough.

$ chmod 755 /home/shprink/
$ chmod 755 /home/shprink/Sites/

Re-run namei to make sure everything is OK.

$ namei -m ~/Sites/
f: /home/shprink/Sites/
 drwxr-xr-x /
 drwxr-xr-x home
 drwxr-xr-x shprink
 drwxr-xr-x Sites

Opening http://localhost/ should now work as expected. If you run into trouble, please leave a comment.

Introduction to Webpack with practical examples

Webpack is taking the task-automation market by storm. I have been using it for months now, and for most of my needs Webpack has taken over from Grunt and Gulp.

Webpack is a module bundler, it takes modules with dependencies and generates static assets representing those modules.

This post will focus only on Webpack “loaders” and “post loaders”.

Webpack Loaders

Loaders are pieces of code that can be injected in the middle of the compilation stream. Post loaders are called at the end of the stream.

Webpack can only process JavaScript natively, but loaders are used to transform other resources into JavaScript. By doing so, every resource becomes a module.
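To make this concrete, a loader is at its core just a function from source string to source string. The sketch below is purely illustrative (the __VERSION__ replacement rule is made up, not a real published loader):

```javascript
// A loader receives the source of a module as a string and returns the
// transformed source. The replacement rule here is hypothetical:
function simpleLoader(source) {
  // replace a made-up __VERSION__ placeholder with a string literal
  return source.replace(/__VERSION__/g, '"1.0.0"');
}
module.exports = simpleLoader;
```

Real loaders (babel-loader, css-loader, etc.) follow the same shape, only with a real compiler behind the function.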

Source available on Github

Prerequisite

You will need Node and npm installed on your machine to go through the following examples.

Install

npm install webpack --save-dev

Now create a webpack.config.js file and dump this basic scaffolding in it:

var webpack = require('webpack'),
    path = require('path');

module.exports = {
    debug: true,
    entry: {
        main: './index.js'
    },
    output: {
        path: path.join(__dirname, 'dist'),
        filename: '[name].js'
    },
    module: {
        loaders: []
    }
};

Your main entry point is now index.js at the root of your folder. The compiled file will be written to the dist folder when compiling.
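A minimal entry file might look like this (hypothetical content, just to have something to compile):

```javascript
// index.js — a sketch of an entry point. webpack starts here and follows
// every require() call to build its dependency graph.
var greet = function (name) {
  return 'Hello ' + name;
};
// a real project would pull in other modules, e.g.:
// var square = require('./src/square');
console.log(greet('webpack')); // Hello webpack
```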

Compile

When your entry point is defined you can start the compilation using the CLI:

# Debug mode
webpack

# Production mode (minified version)
webpack -p

ECMAScript 6 compilation

ECMAScript 6 introduces tons of new features (arrows, classes, generators, modules, etc.) that can be used right now! To do so I recommend using Babel.

Installation:

npm install babel-loader --save-dev

Add the loader to the Webpack configuration:

loaders: [{
  test: /\.es6\.js$/,
  loader: "babel-loader"
}]

You can now require any ES6 module using require('./src/index.es6.js');

Result

Before

// Generators
var fibonacci = {
    [Symbol.iterator]: function*() {
        var pre = 0,
            cur = 1;
        for (;;) {
            var temp = pre;
            pre = cur;
            cur += temp;
            yield cur;
        }
    }
}

module.exports = fibonacci;

After

"use strict";

var fibonacci = function() {
    var a = {};
    a[Symbol.iterator] = regeneratorRuntime.mark(function b() {
        var a, c, d;
        return regeneratorRuntime.wrap(function e(b) {
            while (1) switch (b.prev = b.next) {
              case 0:
                a = 0, c = 1;

              case 1:
                d = a;
                a = c;
                c += d;
                b.next = 6;
                return c;

              case 6:
                b.next = 1;
                break;

              case 8:
              case "end":
                return b.stop();
            }
        }, b, this);
    });
    return a;
}();

module.exports = fibonacci;

CoffeeScript compilation

CoffeeScript needs no introduction; it has been popular for a long time now.

Installation:

npm install coffee-loader --save-dev

Add the loader to the Webpack configuration:

loaders: [{
  test: /\.coffee$/,
  loader: "coffee-loader"
}]

You can now require any CoffeeScript module using require('./src/index.coffee');

Result

Before

module.exports = ->
    square = (x) -> x * x
    math =
      root: Math.sqrt
      square: square
      cube: (x) -> x * square x

After

module.exports = function() {
  var math, square;
  square = function(x) {
    return x * x;
  };
  return math = {
    root: Math.sqrt,
    square: square,
    cube: function(x) {
      return x * square(x);
    }
  };
};

Require CSS files

Installation:

npm install css-loader --save-dev

Add the loader to the Webpack configuration:

The css-loader turns a CSS file into a module, and it takes care of minification when webpack is called in production mode (-p), e.g. webpack -p. To actually inject a style tag into your page at run time, it is usually chained with the style-loader.

loaders: [{
  test: /\.css$/,
  loader: "css-loader"
}]

You can now require any CSS file using require('./src/index.css');


Autoprefix CSS files

Installation:

npm install autoprefixer-loader --save-dev

Add the loader to the Webpack configuration:

What’s really annoying with CSS is that some properties are not implemented the same way by all browsers. That’s the reason behind prefixes: -ms- for IE, -moz- for Firefox and -webkit- for Chrome, Opera and Safari. The autoprefixer loader allows you to use standard CSS properties without having to worry about browser compatibility.

loaders: [{
  test: /\.css$/,
  loader: "css-loader!autoprefixer-loader"
}]

You can now require any CSS file using require('./src/index.css');

Result

Before

body {
    display: flex; 
}

After

body {
    display: -webkit-box;      /* OLD - iOS 6-, Safari 3.1-6 */
    display: -ms-flexbox;      /* TWEENER - IE 10 */
    display: -webkit-flex;     /* NEW - Chrome */
    display: flex;             /* NEW, Spec - Opera 12.1, Firefox 20+ */
}

Sass compilation

Sass lets you use features that don’t exist in CSS yet, like variables, nesting, mixins, inheritance, etc. Code written in Sass is less complex and therefore easier for developers to maintain than standard CSS.

Installation:

npm install css-loader sass-loader --save-dev

Add the loader to the Webpack configuration:

Here we use two loaders at the same time. Loaders are read from right to left: the sass-loader first compiles Sass into CSS, then the css-loader turns the result into a module whose styles can be injected into your page at run time.

loaders: [{
  test: /\.scss$/,
  loader: "css-loader!sass-loader"
}]

You can now require any Sass file using require('./src/index.scss');
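The right-to-left ordering of a loader chain can be illustrated with plain functions standing in for the loaders (hypothetical stand-ins, not the real loaders):

```javascript
// "css-loader!sass-loader" applies the rightmost loader first. Sketch:
function applyLoaderChain(source, loadersLeftToRight) {
  return loadersLeftToRight
    .slice()   // copy so we do not mutate the caller's array
    .reverse() // rightmost loader runs first
    .reduce(function (acc, loader) { return loader(acc); }, source);
}

// stand-ins for the real loaders:
var sassLoader = function (s) { return s + ' /* compiled by sass */'; };
var cssLoader = function (s) { return s + ' /* wrapped as module */'; };

console.log(applyLoaderChain('body{}', [cssLoader, sassLoader]));
// body{} /* compiled by sass */ /* wrapped as module */
```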

Result

Before

$font-stack:    Helvetica, sans-serif;
$primary-color: #333;

body {
  font: 100% $font-stack;
  color: $primary-color;
}

After

body {
  font: 100% Helvetica, sans-serif;
  color: #333;
}

Less compilation

Less is a CSS pre-processor similar to Sass.

Installation:

npm install css-loader less-loader --save-dev

Add the loader to the Webpack configuration:

Here we use two loaders at the same time. Loaders are read from right to left: the less-loader first compiles Less into CSS, then the css-loader turns the result into a module whose styles can be injected into your page at run time.

loaders: [{
  test: /\.less$/,
  loader: "css-loader!less-loader"
}]

You can now require any Less file using require('./src/index.less');

Result

Before

@font-stack: Helvetica, sans-serif;
@primary-color: #333;
body {
    font: 100% @font-stack;
    color: @primary-color;
}

After

body {
  font: 100% Helvetica, sans-serif;
  color: #333;
}

Move files

You can move any type of file around by using the file-loader.

Installation:

npm install file-loader --save-dev

Add the loader to the Webpack configuration:

For this example, let’s move images from their directory to a brand-new image folder, with a naming convention of img-[hash].[ext].

loaders: [{
  test: /\.(png|jpg|gif)$/,
  loader: "file-loader?name=img/img-[hash:6].[ext]"
}]

You can now require any image file using require('./src/image_big.jpg');

Result

The image ./src/img.jpg will be copied and renamed as follows: dist/img/img-a4bd04.jpg


Encode files

Sometimes you do not want to make HTTP requests to get assets. For example, what’s the point of making HTTP requests for tiny images when you can inline them directly (base64-encoded)? The url-loader does just that. All you need to do is set a limit (in bytes) under which you get the encoded version of the file (if the file is bigger, you get its path instead).

Installation:

npm install url-loader --save-dev

Add the loader to the Webpack configuration:

When images are under 5kB we want a base64 version of them, and when they are over 5kB we want the path to them (exactly as with the file-loader).

loaders: [{
  test: /\.(png|jpg|gif)$/,
  loader: "url-loader?limit=5000&name=img/img-[hash:6].[ext]"
}]
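The size check the url-loader performs can be sketched like this (a simplified, hypothetical helper; the real loader works on the raw file contents and webpack's emitted paths):

```javascript
// Simplified sketch of the url-loader decision: inline small files as a
// base64 data URI, return a path for large ones.
function inlineOrPath(contents, mimeType, limitBytes, fallbackPath) {
  var buf = Buffer.from(contents);
  if (buf.length <= limitBytes) {
    return 'data:' + mimeType + ';base64,' + buf.toString('base64');
  }
  return fallbackPath;
}

console.log(inlineOrPath('tiny', 'image/png', 5000, 'img/big.png'));
// a data URI, since 4 bytes < 5000
```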

Result

Before

var imgBig = '<img src="' + require("./src/image_big.jpg") + '" />';
var imgSmall = '<img src="' + require("./src/image_small.png") + '" />';

After

var imgBig = '<img src="img/img-a4bd04.jpg" />';
var imgSmall = '<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAA" />';

Require HTML files

The html-loader turns any HTML file into a module and requires any image dependencies along the way!

Installation:

npm install html-loader --save-dev

Add the loader to the Webpack configuration:

loaders: [{
  test: /\.html$/,
  loader: "html-loader"
}]

You can now require any HTML file using require('./src/index.html');. All images will also be treated as dependencies and therefore go through their own stream of events (see Encode files).

Result

Before

<html>
    <head>
        <title></title>
        <meta charset="UTF-8">
        <meta name="viewport" content="width=device-width">
    </head>
    <body>
        <img src="./image_small.png">
    </body>
</html>

After

module.exports = '<html>\n    <head>\n        <title></title>\n        <meta charset="UTF-8">\n        <meta name="viewport" content="width=device-width">\n    </head>\n    <body><img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAA"></body>\n</html>';

Expose any module

The expose-loader allows you to bind any module to the global scope.

Installation:

npm install expose-loader --save-dev

Add the loader to the Webpack configuration:

In this example we want lodash (a JavaScript utility library) to be exposed in the global scope as _.

loaders: [{
  test: require.resolve("lodash"),
  loader: 'expose?_'
}]

Now requiring lodash (require('lodash');) will also expose it globally. This is handy for popular modules such as AngularJS, jQuery, Underscore, Moment or Hammer.js.
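Conceptually, what the expose-loader arranges can be sketched like this (fakeLodash is a stand-in for the real require('lodash')):

```javascript
// Sketch of what expose-loader does: the module's export is also
// attached to the global scope. fakeLodash stands in for lodash.
var fakeLodash = {
  map: function (arr, fn) { return arr.map(fn); }
};
var globalScope = typeof window !== 'undefined' ? window : global;
globalScope._ = fakeLodash; // this is what expose?_ does for you
console.log(typeof globalScope._.map); // function
```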

Creating a hybrid app in minutes with Ionic Framework

Creating hybrid apps with Ionic is really fast and powerful. I have gathered a year of information to create an ultimate presentation of Ionic in an airpair blog post: A year using Ionic to build hybrid applications

If you only want a summary of what’s inside this post you can check out the presentation below. It contains all you need to know to create, build and package a hybrid app with Ionic!

http://www.slideshare.net/julienrenaux/hybrid-apps-withionic

Some quick tips:

  • Develop in the browser with live reload: ionic serve
  • Add a platform (iOS or Android): ionic platform add ios [android]. Note: iOS development currently requires OS X
  • Build your app: ionic build
  • Simulate your app: ionic emulate
  • Run your app on a device: ionic run
  • Package an app using Ionic package service: ionic package

AngularJS, Browserify, Polymer and Sass boilerplate

Source available on Github : https://github.com/shprink/angularjs-browserify-polymer-boilerplate

Two-way data binding

Using Polymer and AngularJS at the same time is a bit problematic. Two-way data binding (an AngularJS feature) cannot natively work with Polymer web components. Some projects are emerging to bridge the two, but they are not mature enough to use yet, so I decided to directly use AngularJS’s resources within Polymer elements.

Using AngularJs’s services within Polymer

var userService = angular.injector(['boilerplate']).get('UserService');

Polymer('user-list', {
    ready: function() {
        this.list = userService.getList();
    },
    tapAction: function(e) {
        console.log('tap', e);
    }
});

Using AngularJs’s scope within Polymer

Polymer('boilerplate-login', {
    ready: function() {
        this.angularScope = angular.element(this).scope();
    },
    sendForm: function(){
        this.angularScope.goToHome();
    }
});

How to install “Nike+ Fuelband” app on Android when not available in your country

In June Nike introduced its “Nike+ Fuelband” app for Android. This was great news for the Android community, which had been waiting for this day to come, but unfortunately the app is only available for certain versions of Android, and in only five countries…

With 30 million users, NikeFuel is rapidly becoming the universal measure of activity. Now, with the launch of the Nike+ FuelBand App for Android we’re delighted to help even more athletes get better […] This is another important step in the continued growth of our powerful ecosystem of digital products and services.

Stefan Olander, Nike’s VP of Digital Sport

Available on some Androids

Jelly Bean (4.3) or higher and Bluetooth LE support are mandatory. The following devices are therefore already compatible:

  • Samsung Galaxy S3
  • Samsung Galaxy S4
  • Samsung Galaxy S5
  • HTC One
  • Nexus 5
  • Moto X

Available in five countries


A downside of this announcement is that the app is only made available in five countries: US, UK, Canada, Germany and Japan. If you want to download the app from another country you will be disappointed to see the following message: “this item is not available in your country”.

Unfortunately this is not explained when you buy a “Fuelband SE”, but no worries, alternatives exist.

Install Aptoide

Aptoide is an alternative marketplace for Android applications where you can add different trusted stores. All you need to do is download the apk and follow the instructions. Once that is done, there is still one more step.

Add HUSSAMYAR store

The HUSSAMYAR store has more than 210 applications and, most importantly, contains the “Nike+ Fuelband” apk.

Install Nike+ Fuelband

Now open the Aptoide market and install the “Nike+ Fuelband” application, just as you would on Google Play.


How to use Git submodules to facilitate your development routine

I have been using Git submodules for a while now, and I was sometimes lost on how to use them correctly and efficiently. This post will guide you through the why and how of Git submodules.

What is Git submodule?

Git submodule is a Git command that essentially allows external repositories to be embedded within a dedicated subdirectory of the source tree.

Why use Git submodules?

It often happens that while working on one project, you need to use another project from within it. Perhaps it’s a library that a third party developed or that you’re developing separately and using in multiple parent projects.

In my opinion there are several situations where using submodules is pretty useful:

  • When you want to separate a part of your code into another repository for other projects to use, and still be able to use it within your project as a dependency.
  • When you do NOT want to use a dependency manager (such as Composer for PHP). Using Composer for personal repositories can be overkill; submodules in that case are a fair choice.
  • When you use a front-end dependency manager like Bower. Bower is good but only fetches the distribution files of each dependency; it never clones Git repositories (therefore your dependencies are read-only).

Starting a repository with submodules

From scratch

Using a Git repository that does not have submodules yet.

Add a submodule

$ git submodule add git@github.com:shprink/BttrLazyLoading.git

From existing submodules

Using a Git repository that already has submodules registered.

Register submodules

$ git submodule init
Submodule 'lib/BttrLazyLoading' (git@github.com:shprink/BttrLazyLoading.git) registered for path 'lib/BttrLazyLoading'
Submodule 'lib/ShprinkOne' (git@github.com:shprink/Shprink-One.git) registered for path 'lib/ShprinkOne'

Checkout submodules

$ git submodule update
Cloning into 'lib/BttrLazyLoading'...
Submodule path 'lib/BttrLazyLoading': checked out '270b55e177ca555bb4fa559d0e663178ac5006a3'
Cloning into 'lib/ShprinkOne'...
Submodule path 'lib/ShprinkOne': checked out '0dddd0f3e24a473675022c7f79c6b1a27a095914'

From a .gitmodules file

Using the .gitmodules file of another repository, you can build your own by just running a shell script.

.gitmodules file

[submodule "lib/native"]
path = lib/native
url = git@github.com:shprink/BttrLazyLoading.git
[submodule "lib/mobile"]
path = lib/mobile
url = git@github.com:shprink/Shprink-One.git

Shell script

Create an empty file yourfilename.sh and paste the script below:


#!/bin/sh

set -e

git config -f .gitmodules --get-regexp '^submodule\..*\.path$' |
    while read path_key path
    do
        url_key=$(echo $path_key | sed 's/\.path$/.url/')
        url=$(git config -f .gitmodules --get "$url_key")
        git submodule add $url $path
    done

Run the script with:

$ sh yourfilename.sh

Daily routine

Now that your repository is ready, it is time to start working! Using submodules can increase the number of commands you will need to run, but not necessarily. I personally gained productivity using them.

Status

The git submodule status command lets you see the latest commit hash and the branch currently checked out.

$ git submodule status
270b55e177ca555bb4fa559d0e663178ac5006a3 lib/BttrLazyLoading (heads/master)
0dddd0f3e24a473675022c7f79c6b1a27a095914 lib/ShprinkOne (heads/master)

Update

The git submodule update command updates your submodules according to the latest source tree (root repository) known. This means that if your source tree is not up to date, you can end up using old versions of your submodules. This is pretty annoying, especially when working in a team.

If you always want to be up to date, your team needs to be rigorous about updating the source tree on every submodule commit…

After experiencing this problem I decided to use the git submodule foreach command instead. It simply loops over your submodules and executes a command in each.

Update master on all submodules

$ git submodule foreach 'git pull origin master'

Commit

Now that you have worked on one or several submodules, it is time to commit your changes. To make sure you are ready to share your work, run git status on all submodules:

$ git submodule foreach 'git status'
Entering 'lib/BttrLazyLoading'
# On branch master
# Changes not staged for commit:
#   (use "git add <file>..." to update what will be committed)
#   (use "git checkout -- <file>..." to discard changes in working directory)
#
#    modified:   newFile.js
#
Entering 'lib/ShprinkOne'
# On branch master
nothing to commit (working directory clean)

Add your new files

$ git submodule foreach 'git add --all'

Commit your changes

$ git submodule foreach 'git commit -m "your commit message" || true'

Using || true makes sure that the loop does not stop when it reaches a repository with nothing to commit. Otherwise you would get the following error message:

Entering 'lib/ShprinkOne'
# On branch master
nothing to commit (working directory clean)
Stopping at 'lib/ShprinkOne'; script returned non-zero status.

Push to your remote

$ git submodule foreach 'git push origin master'


Introduction to Gulp.js with practical examples

What is Gulp.js?

Gulp.js is what we call a JavaScript Task Runner, it is Open Source and available on GitHub. It helps you automate repetitive tasks such as minification, compilation, unit testing, linting, etc. Gulp.js does not revolutionize automation but simplifies it tremendously.

Today the Web automation ecosystem is dominated by Grunt.js (which is a great tool, BTW), but lately Gulp.js has been trending and may soon overtake Grunt.js (judging by GitHub popularity, aka “stars”: 7900 for Grunt.js and 6250 for Gulp.js).

How is it better than Grunt or Cakefile?

I would say it is neither better nor worse than Grunt or Cakefile; it is different. While Cakefile or Grunt use files to execute tasks, Gulp.js uses streams. A typical Cakefile or Grunt workflow is to execute a task that dumps a temporary file, then based on this file execute another task that dumps another temporary file, and so on…

With Gulp.js everything happens on the fly using Node streams; temporary files are not needed anymore, which makes it easy to learn, use and enjoy.
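The idea can be sketched with plain function composition (a synchronous stand-in; gulp itself builds on real Node streams, and the "plugins" below are made up):

```javascript
// Each .pipe() step transforms data in memory; nothing touches disk
// until the final destination. Synchronous sketch of that flow:
function pipeThrough(input, transforms) {
  return transforms.reduce(function (acc, fn) { return fn(acc); }, input);
}

// stand-ins for gulp plugins:
var collapseWhitespace = function (s) { return s.replace(/\s+/g, ' ').trim(); };
var addBanner = function (s) { return '/* banner */ ' + s; };

console.log(pipeThrough('var  a = 1;\n', [collapseWhitespace, addBanner]));
// /* banner */ var a = 1;
```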

Installation

Use the -g option to install Gulp globally on your machine.

sudo npm install -g gulp

Practical examples

Now let’s use Gulp a little. Throughout these examples we will discover how to use plugins to create specific tasks such as minifying, concatenating or even linting your code.

Source available on Github

HTML Minification

Using gulp-minify-html

npm install --save-dev gulp-minify-html

gulpfile.js:

// including plugins
var gulp = require('gulp')
, minifyHtml = require("gulp-minify-html");

// task
gulp.task('minify-html', function () {
	gulp.src('./Html/*.html') // path to your files
	.pipe(minifyHtml())
	.pipe(gulp.dest('path/to/destination'));
});

Run:

gulp minify-html

CSS Minification

Using gulp-minify-css

npm install --save-dev gulp-minify-css

gulpfile.js:

// including plugins
var gulp = require('gulp')
, minifyCss = require("gulp-minify-css");

// task
gulp.task('minify-css', function () {
	gulp.src('./Css/one.css') // path to your file
	.pipe(minifyCss())
	.pipe(gulp.dest('path/to/destination'));
});

Run:

gulp minify-css

JS Minification

Using gulp-uglify

npm install --save-dev gulp-uglify

gulpfile.js:

// including plugins
var gulp = require('gulp')
, uglify = require("gulp-uglify");

// task
gulp.task('minify-js', function () {
	gulp.src('./JavaScript/*.js') // path to your files
	.pipe(uglify())
	.pipe(gulp.dest('path/to/destination'));
});

Run:

gulp minify-js

CoffeeScript Compilation

Using gulp-coffee

npm install --save-dev gulp-coffee

gulpfile.js:

// including plugins
var gulp = require('gulp')
, coffee = require("gulp-coffee");

// task
gulp.task('compile-coffee', function () {
	gulp.src('./CoffeeScript/one.coffee') // path to your file
	.pipe(coffee())
	.pipe(gulp.dest('path/to/destination'));
});

Run:

gulp compile-coffee

Less Compilation

Using gulp-less

npm install --save-dev gulp-less

gulpfile.js:

// including plugins
var gulp = require('gulp')
, less = require("gulp-less");

// task
gulp.task('compile-less', function () {
	gulp.src('./Less/one.less') // path to your file
	.pipe(less())
	.pipe(gulp.dest('path/to/destination'));
});

Run:

gulp compile-less

Sass Compilation

Using gulp-sass

npm install --save-dev gulp-sass

gulpfile.js:

// including plugins
var gulp = require('gulp')
, sass = require("gulp-sass");

// task
gulp.task('compile-sass', function () {
	gulp.src('./Sass/one.sass') // path to your file
	.pipe(sass())
	.pipe(gulp.dest('path/to/destination'));
});

Run:

gulp compile-sass

ECMAScript 6 Compilation

Using gulp-babel

npm install --save-dev gulp-babel

gulpfile.js:

// including plugins
var gulp = require('gulp')
, babel = require("gulp-babel");

// task
gulp.task('compile-es6', function () {
	gulp.src('./ES6/one.es6.js')
        .pipe(babel())
	.pipe(gulp.dest('path/to/destination'));
});

Run:

gulp compile-es6

JavaScript Linting

Using gulp-jshint

npm install --save-dev gulp-jshint

gulpfile.js:

// including plugins
var gulp = require('gulp')
, jshint = require("gulp-jshint");

// task
gulp.task('jsLint', function () {
	gulp.src('./JavaScript/*.js') // path to your files
	.pipe(jshint())
	.pipe(jshint.reporter()); // Dump results
});

In case of success:

[gulp] Starting 'jsLint'...
[gulp] Finished 'jsLint' after 6.47 ms

In case of failure:

[gulp] Starting 'jsLint'...
[gulp] Finished 'jsLint' after 5.86 ms
/var/www/gulp-examples/JavaScript/two.js: line 3, col 15, Expected '}' to match '{' from line 3 and instead saw '['.
/var/www/gulp-examples/JavaScript/two.js: line 3, col 16, Missing semicolon.
/var/www/gulp-examples/JavaScript/two.js: line 3, col 154, Expected an assignment or function call and instead saw an expression.

3 errors

Run:

gulp jsLint

CoffeeScript Linting

Using gulp-coffeelint

npm install --save-dev gulp-coffeelint

gulpfile.js:

// including plugins
var gulp = require('gulp')
, coffeelint = require("gulp-coffeelint");

// task
gulp.task('coffeeLint', function () {
	gulp.src('./CoffeeScript/*.coffee') // path to your files
	.pipe(coffeelint())
	.pipe(coffeelint.reporter());
});

In case of success:

[gulp] Starting 'coffeeLint'...
[gulp] Finished 'coffeeLint' after 7.37 ms

In case of failure:

[gulp] Starting 'coffeeLint'...
[gulp] Finished 'coffeeLint' after 6.25 ms

one.coffee
✗  line 3  Line contains a trailing semicolon

✗ 1 error

Run:

gulp coffeeLint

Rename a file

Using gulp-rename

npm install --save-dev gulp-rename

gulpfile.js:

// including plugins
var gulp = require('gulp')
, rename = require('gulp-rename')
, coffee = require("gulp-coffee");

// task
gulp.task('rename', function () {
	gulp.src('./CoffeeScript/one.coffee') // path to your file
	.pipe(coffee())  // compile coffeeScript
	.pipe(rename('renamed.js')) // rename into "renamed.js" (original name "one.js")
	.pipe(gulp.dest('path/to/destination'));
});

Run:

gulp rename

Concatenate files

Using gulp-concat

npm install --save-dev gulp-concat

gulpfile.js:

// including plugins
var gulp = require('gulp')
, concat = require("gulp-concat");

// task
gulp.task('concat', function () {
	gulp.src('./JavaScript/*.js') // path to your files
	.pipe(concat('concat.js'))  // concat and name it "concat.js"
	.pipe(gulp.dest('path/to/destination'));
});

Run:

gulp concat

Add copyright

Using gulp-header

npm install --save-dev gulp-header

Copyright

/*
Gulp Examples by @julienrenaux:

* https://github.com/shprink
* https://twitter.com/julienrenaux
* https://www.facebook.com/julienrenauxblog

Full source at https://github.com/shprink/gulp-examples

MIT License, https://github.com/shprink/gulp-examples/blob/master/LICENSE
*/

gulpfile.js:

// including plugins
var gulp = require('gulp')
, fs = require('fs')
, concat = require("gulp-concat")
, header = require("gulp-header");

// functions

// Get copyright using NodeJs file system
var getCopyright = function () {
	return fs.readFileSync('Copyright');
};

// task
gulp.task('concat-copyright', function () {
	gulp.src('./JavaScript/*.js') // path to your files
	.pipe(concat('concat-copyright.js')) // concat and name it "concat-copyright.js"
	.pipe(header(getCopyright()))
	.pipe(gulp.dest('path/to/destination'));
});

Run:

gulp concat-copyright

Add copyright with version

Using gulp-header and Node’s file system

npm install --save-dev gulp-header

Copyright

/*
Gulp Examples by @julienrenaux:

* https://github.com/shprink
* https://twitter.com/julienrenaux
* https://www.facebook.com/julienrenauxblog

Version: <%= version %>
Full source at https://github.com/shprink/gulp-examples

MIT License, https://github.com/shprink/gulp-examples/blob/master/LICENSE
*/
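The <%= version %> placeholder uses lodash-template-style interpolation. A minimal stand-in for that interpolation step (not gulp-header’s actual code):

```javascript
// Replace <%= key %> placeholders with values from a data object:
function renderHeader(template, data) {
  return template.replace(/<%=\s*(\w+)\s*%>/g, function (match, key) {
    return String(data[key]);
  });
}

console.log(renderHeader('Version: <%= version %>', { version: '1.0.0' }));
// Version: 1.0.0
```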

Version

1.0.0

gulpfile.js:

// including plugins
var gulp = require('gulp')
, fs = require('fs')
, concat = require("gulp-concat")
, header = require("gulp-header");

// functions

// Get version using NodeJs file system
var getVersion = function () {
	return fs.readFileSync('Version');
};

// Get copyright using NodeJs file system
var getCopyright = function () {
	return fs.readFileSync('Copyright');
};

// task
gulp.task('concat-copyright-version', function () {
	gulp.src('./JavaScript/*.js')
	.pipe(concat('concat-copyright-version.js')) // concat and name it "concat-copyright-version.js"
	.pipe(header(getCopyright(), {version: getVersion()}))
	.pipe(gulp.dest('path/to/destination'));
});

Run:

gulp concat-copyright-version

Mix them up (Lint, Concat, Compile, Minify etc.)

The purpose of this task is to mix the previous tasks into just one.

Copyright

/*
Gulp Examples by @julienrenaux:

* https://github.com/shprink
* https://twitter.com/julienrenaux
* https://www.facebook.com/julienrenauxblog

Version: <%= version %>
Full source at https://github.com/shprink/gulp-examples

MIT License, https://github.com/shprink/gulp-examples/blob/master/LICENSE
*/

Version

1.0.0

gulpfile.js:

// including plugins
var gulp = require('gulp')
, fs = require('fs')
, coffeelint = require("gulp-coffeelint")
, coffee = require("gulp-coffee")
, uglify = require("gulp-uglify")
, concat = require("gulp-concat")
, header = require("gulp-header");

// functions

// Get version using NodeJs file system
var getVersion = function () {
	return fs.readFileSync('Version');
};

// Get copyright using NodeJs file system
var getCopyright = function () {
	return fs.readFileSync('Copyright');
};

// task
gulp.task('bundle-one', function () {
	gulp.src('./CoffeeScript/*.coffee') // path to your files
	.pipe(coffeelint()) // lint files
	.pipe(coffeelint.reporter('fail')) // make sure the task fails if not compliant
	.pipe(concat('bundleOne.js')) // concat files
	.pipe(coffee()) // compile coffee
	.pipe(uglify()) // minify files
	.pipe(header(getCopyright(), {version: getVersion()})) // Add the copyright
	.pipe(gulp.dest('path/to/destination'));
});

Run:

gulp bundle-one

Tasks automation

Using gulp.watch you can easily automate tasks when files are modified. It is really convenient because you do not have to run individual tasks by hand every time a file changes, and your compiled code is therefore always up to date.

// including plugins
var gulp = require('gulp');

// task
gulp.task('watch-coffeescript', function () {
    gulp.watch(['./CoffeeScript/*.coffee'], ['compile-coffee']);
});

Run:

gulp watch-coffeescript

and see what happens when you modify one of the source files.

How to setup name-based Virtual Hosts on Ubuntu 13.10 and Apache 2.4.6

Let’s imagine you have a project named myvhost-www within the folder /var/www. If you want to access it through your browser the URL would be http://localhost/myvhost-www/

Wouldn’t it be better to use the URL http://myvhost.com/? To do so you will need to use a name-based Virtual Host.

With name-based virtual hosting, the server relies on the client to report the hostname as part of the HTTP headers. Using this technique, many different hosts can share the same IP address (which is the case for your personal computer).

Step 1: Create a Virtual Host configuration file

Create the file myvhost.conf within /etc/apache2/sites-available folder.

The configuration file MUST have a .conf extension
$ cd /etc/apache2/sites-available
# create an empty file
$ sudo touch myvhost.conf
# Edit it
$ sudo vi myvhost.conf

Within this file insert the following content:

<VirtualHost *:80>
        ServerName myvhost.com
        DocumentRoot "/var/www/myvhost-www"
</VirtualHost>

Inside each block, you will need at minimum a ServerName directive to designate which host is served and a DocumentRoot directive to indicate where in the filesystem the content for that host lives. In the usual case where any and all IP addresses on the server should be used, you can use * as the address in the <VirtualHost> directive (as in “*:80” above). Note that the NameVirtualHost directive is deprecated in Apache 2.4 and no longer needed.

Step 2: Enable the Virtual Host

# Enabling site myvhost
$ sudo a2ensite myvhost.conf

# To activate the new configuration, you need to run:
$ sudo service apache2 reload

If you check the sites-enabled folder, you will see that your new virtual host is enabled and points to the file we created previously.

$ cd /etc/apache2/sites-enabled
$ ll
lrwxrwxrwx 1 root root   31 Mar  8 18:47 myvhost.conf -> ../sites-available/myvhost.conf

Step 3: Register the Virtual Host to the hosts file

The hosts file is a computer file used by an operating system to map hostnames to IP addresses.

$ sudo vi /etc/hosts

Add the following line to this file:

127.0.0.1       myvhost.com

Entries in the hosts file take precedence over DNS by default, which means that even if the myvhost.com domain name exists, the system will not attempt to look up its record in DNS.

Step 4: Enable mod_rewrite

mod_rewrite provides a way to modify incoming URL requests, dynamically, based on regular expression rules. This allows you to map arbitrary URLs onto your internal URL structure in any way you like.

$ sudo a2enmod rewrite
$ sudo service apache2 restart

You have now configured a Virtual Host: point your browser to http://myvhost.com/ and you will see the same content as on http://localhost/myvhost-www/

Leading the SVN to Git migration

Once upon a time I started a new challenge within a new company. This company grew so fast that its development processes and tools were no longer suitable. Most projects started with only one developer and, at the time, the company chose Subversion (SVN) as its main Version Control System (VCS) and a classic, linear Waterfall process to manage projects. These choices froze things for years.

Coming from a large team (10 members) in a totally Agile environment, the gap I faced was huge. The lack of productivity the company suffered resulted from those ancient choices. A colleague and I decided to change things by introducing several development tools and practices such as Git, GitLab (a GitHub alike), Jenkins (Continuous Integration), Composer (a dependency manager for PHP) and Scrum (an Agile development framework).

The SVN to Git transition can be pretty hard to handle, depending on the size of your company and the number of repositories, branches and tags at stake.

To avoid losing time and money, things have to be well prepared. In this post I will share my experience and some tricks to lead the SVN to Git transition successfully.

Why do companies still use Subversion?

Despite the popularity of Git nowadays, a tremendous number of companies still use Subversion.

Git is not better than Subversion. But it is not worse either. It’s different. There are many possible reasons why companies still use Subversion:

  1. Subversion’s initial release was in 2000 while Git’s was in 2005. Any software company created before 2005 might therefore have started with Subversion (it is the case of eBay, Inc. for example)
  2. Subversion is less complex and suits developers working alone better. Git adds complexity: two modes of creating repositories, checkout vs. clone, commit vs. push… You have to know which commands work locally and which work with “the remote”
  3. The company grew up too quickly: the lack of time and money that most start-ups face can lead to underestimating developers’ needs and making unreasonable architecture choices.
  4. The fear of change, well explained by Kurt Lewin as a three-stage process:
    1. Unfreezing: dismantling the existing “mind set”; defense mechanisms have to be bypassed
    2. Movement: when the change occurs, this is typically a period of confusion and transition
    3. Freezing: the new mindset crystallizes and one’s comfort level returns to previous levels

What does Git do better than SVN?

I invite you to watch this video of Linus Torvalds’ (Git creator and Linux project coordinator) tech talk at Google.

If you like using CVS, you should be in some kind of mental institution or somewhere else. Linus Torvalds

Git is Scalable

Git is perfect for medium to large projects because it is scalable: the more files and developers involved in your project, the more you can leverage it.

Git is Distributed

Git is a DVCS (Distributed Version Control System), meaning that every user has a complete copy of the repository data (including history) stored locally, while SVN has one central repository. Developers can therefore commit off-line and synchronize their work with remote repositories when back online.

Git branching and merging support is a lot better

SVN isn’t branch-centric while Git is designed around the idea of branching. Making branches, using branches, merging two branches together, merging three branches together, merging branches from local and remote repositories together – Git is good at branches.

Git Flow

Git-Flow is a set of git extensions that handles most high-level repository operations (feature, hotfix and release) based on Vincent Driessen’s branching model. I recommend using this tool, but only if you understand basic Git commands (git branch, git checkout, git pull etc.), otherwise you will get lost in some cases.

OSX Installation
$ brew install git-flow
Linux Installation
$ sudo apt-get install git-flow
Windows (Cygwin) Installation
$ wget -q -O - --no-check-certificate https://github.com/nvie/gitflow/raw/develop/contrib/gitflow-installer.sh | bash

Git is Fast

Git operations are much faster than SVN’s. All operations (except for push and pull) are done locally, so there is no network latency involved for most daily routine commands (git diff, git log, git commit, git branch, git merge etc.). A Git repository is also around 30x smaller than the equivalent SVN repository, which is not negligible when cloning (or backing up).

Preparation

Teaching

Being on GitHub does not mean being competent at Git!

Teaching Git to developers is essential. Git can be hard to learn, especially if you are used to SVN. You need to insist on what Git does better than SVN and on the things you can now do with Git that you could not with SVN. I have seen developers on GitHub who were lost using Git in a work environment; being on GitHub does not mean being competent at Git!

Here are some of the slides I presented to developers:

http://www.slideshare.net/julienrenaux/git-presentation-31666204

Creating an authors.txt file to map SVN users to Git users

SVN and Git do not store the developer identity the same way when committing. SVN stores the username while Git stores the full name and the email address.

Therefore prior to migrating to Git, you need to create an author mapping file that has the following format:

fdeveloper  = First Developer <first.developer@company.com>
sdeveloper = Second Developer <second.developer@company.com>
tdeveloper  = Third Developer <third.developer@company.com>
etc..
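
The mapping format above can be sanity-checked before running the migration with a small Node script (a hypothetical helper of my own, not part of git svn; parseAuthors is an illustrative name):

```javascript
// Parse authors.txt-style lines ("svnuser = Full Name <email>") into objects,
// so a malformed line is caught before git svn chokes on it mid-migration.
function parseAuthors(text) {
    return text.split('\n')
        .map(function (line) { return line.trim(); })
        .filter(function (line) { return line.length > 0; })
        .map(function (line) {
            var match = line.match(/^(\S+)\s*=\s*(.+?)\s*<(.+)>$/);
            if (!match) {
                throw new Error('Malformed line: ' + line);
            }
            return { svnUser: match[1], name: match[2], email: match[3] };
        });
}

var sample = 'fdeveloper = First Developer <first.developer@company.com>\n' +
             'sdeveloper = Second Developer <second.developer@company.com>';
console.log(parseAuthors(sample));
```

If any line does not match the expected shape, the script fails loudly, which is much cheaper than discovering the problem halfway through a long git svn clone.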

If you have missed this step and already migrated to Git, don’t worry, we can still rewrite history using the command git filter-branch!

git filter-branch --commit-filter '
        if [ "$GIT_COMMITTER_NAME" = "fdeveloper" ]; # SVN username
        then
                GIT_COMMITTER_NAME="First Developer";
                GIT_AUTHOR_NAME="First Developer";
                GIT_COMMITTER_EMAIL="first.developer@company.com";
                GIT_AUTHOR_EMAIL="first.developer@company.com";
                git commit-tree "$@";
        else
                git commit-tree "$@";
        fi' HEAD

This command will go through every single commit and if necessary change the developer information.

Using this command can be pretty slow depending on the number of commits involved.

Migrating

git svn is a simple conduit for changesets between Subversion and git. It provides a bidirectional flow of changes between a Subversion and a git repository.

Clone

For a completely transparent transition (importing the commit history) you will need to use git svn.

$ git svn clone -A ~/Desktop/authors.txt svn://IP@/Project/trunk .

You can also tell git svn not to include the metadata that Subversion normally imports, by passing --no-metadata:

$ git svn clone -A ~/Desktop/authors.txt svn://IP@/Project/trunk . --no-metadata

Migrate tags

This takes the references that were remote branches starting with tags/ and makes them real (lightweight) tags.

$ git for-each-ref refs/remotes/tags | cut -d / -f 4- | grep -v @ | while read tagname; do git tag "$tagname" "tags/$tagname"; git branch -r -d "tags/$tagname"; done

Conclusion

Do not underestimate the unwillingness of your coworkers!

If you want your migration to succeed, I suggest you spend a lot of time teaching Git to developers and confronting them with real cases and the things Git does better than SVN. Do not underestimate the unwillingness of your coworkers!

Once everybody is up to date with Git, you can start working on your migration. If you use git svn things should be pretty smooth, but do not forget to make backups, just in case 😉

Create a “load more” widget using AngularJS, Ajax and Bootstrap 3

Demo and download: http://julienrenaux.fr/wp-content/uploads/2013/10/load-more-angularjs.zip

Prerequisite

Make sure you have a basic knowledge of the Bootstrap 3 framework and AngularJS. If you are used to Bootstrap 2.3.2, here is a post I wrote about what’s new in Twitter Bootstrap 3.

Step 1: Getting the data ready

Before doing anything, a good practice is to focus on the data format you want your API to return. Within this demo we will not focus on how to get the data (back-end code); instead we will just pretend the API works and returns the data we want in the way we want.

An example of the back-end code using PHP and WordPress is shown in the demo.

By knowing the format, data manipulation gets easier. For my demo I want the API to return an array of post objects: [{},{}, ...], which is a pretty classic format.

Example

[
    {
        "post_title":"How to automatically checkout the latest tag of a Git repository",
        "post_content":"Now that you know how to automatically tag a Git repository an interesting command to know about is \"How to automatically checkout the latest tag of a Git repository\". Let's imagine that you want to a...",
        "post_name":"how-to-automatically-checkout-the-latest-tag-of-a-git-repository",
        "post_date":"2013-10-04 18:31:00",
        "post_img":"path/to/your/img.png",
        "ID":"1292"
    },
    {
        "post_title":"Sidr",
        "post_content":"Sidr is a jQuery plugin for creating side menus. It is the library that I use to create \"Previous\" and \"Next\" post buttons on my Open Sourced WordPress theme (the one you are using righ...",
        "post_name":"sidr",
        "post_date":"2013-09-24 20:16:04",
        "post_img":"path/to/your/img.png",
        "ID":"1279"
    }
]
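
Since the back end stays out of scope, the contract above can be emulated with a tiny mock in plain JavaScript (fetchPosts and allPosts are illustrative names of my own): given start and desiredPosts, it returns the matching slice of the full post list, or an empty array once everything has been served.

```javascript
// Mock of the API contract: return `desiredPosts` posts starting at `start`.
// `allPosts` stands in for whatever the real back end (PHP/WordPress) would query.
var allPosts = [
    { post_title: 'Post 1', ID: '1' },
    { post_title: 'Post 2', ID: '2' },
    { post_title: 'Post 3', ID: '3' }
];

function fetchPosts(start, desiredPosts) {
    // Array.slice clamps out-of-range indexes, so requests past the
    // end of the list simply yield an empty array.
    return allPosts.slice(start, start + desiredPosts);
}

console.log(fetchPosts(0, 2)); // first page: posts 1 and 2
console.log(fetchPosts(2, 2)); // second page: only post 3 is left
console.log(fetchPosts(4, 2)); // past the end: empty array
```

The empty-array case is what the widget will later use to know that there is nothing more to load.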

Step 2: Creating a basic scaffolding

Insert Scripts and Style-sheets

Insert the following lines within the head section of your HTML.

<link href="//netdna.bootstrapcdn.com/bootstrap/3.0.0/css/bootstrap.min.css" type="text/css" rel="stylesheet">
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.0-rc.2/angular.min.js"></script>
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.0-rc.2/angular-resource.min.js"></script>

Create a module

Creating a module facilitates dependency handling. At the moment we do not have any dependencies, but we will add a Factory later on.

<script type="text/javascript">
    angular.module('demo', []);
</script>

To link the module to the DOM you need to tell AngularJS to be active in a portion of the page by adding the data-ng-app attribute, in this case on the entire document.

<body data-ng-app="demo">
</body>

Add a controller

AngularJS embraces the MVC (Model, View, Controller) pattern and dependency injection; both concepts are illustrated here. Our controller MyCtrl can manipulate the model $scope that was injected via MyCtrl.$inject. These concepts are extremely important to understand in order to go further.

<script type="text/javascript">
    angular.module('demo', []);

    function MyCtrl($scope) {}
    MyCtrl.$inject = ['$scope'];
</script>

We now have a basic AngularJS application ready.

Step 3: Creating a Factory

An elegant way to communicate with the server is to create a Factory. This lets the controller focus on the behavior rather than on the complexities of server access. Interaction with server-side data sources can be made through the “ngResource” module (previously included: angular-resource.min.js).

<script type="text/javascript">
    angular.module('demo', ['demoFactory']);

    angular.module('demoFactory', ['ngResource']).factory('ResourceFactory', function($resource) {
        return $resource('?desiredPosts=:desiredPosts&start=:start', {}, {
            // Declaration of custom action that should extend the default set of resource actions
            query: {
                isArray: true,
                cache: false
            }
        });
    });
    function MyCtrl($scope, ResourceFactory) {}
    MyCtrl.$inject = ['$scope', 'ResourceFactory'];
</script>

Please note that I have added the newly created demoFactory as a dependency of my module, and the ResourceFactory object to my controller MyCtrl via MyCtrl.$inject.

Step 4: Interacting with the DOM (Document Object Model)

To understand how AngularJS manipulates the DOM let’s try to display the number of posts already loaded within our widget. To do so in our controller we initiate the post list $scope.list to an empty array (no posts loaded at the beginning) and set the $scope.count property to the list length.

function MyCtrl($scope, ResourceFactory) {
    $scope.list = [];
    $scope.count = $scope.list.length;
}

$scope contains your model data. It is the glue between the controller and the view (HTML). To access the data within the view you must declare the data-binding locations using {{ }}. AngularJS will automatically update the view whenever the $scope properties change.

Now let’s use a Bootstrap 3 panel to create our widget. We need a header with the widget title (plus the post count) and a post body.

<body data-ng-app="demo">
    <div id="widget" class="panel panel-default" data-ng-controller="MyCtrl">
        <div class="panel-heading">Widget Title <span class="badge">{{count}}</span></div>
        <div class="panel-body"></div>
    </div>
</body>

As soon as we load posts into our list, the {{count}} binding that you can see in the HTML above will be updated.

Create an empty “load more” function

Now that we know how AngularJS updates the DOM we need to populate it with fresh data. Let’s create a function “loadMore” that we will trigger on a button click event.

function MyCtrl($scope, ResourceFactory) {
    $scope.list = [];
    $scope.count = $scope.list.length;
    $scope.loadMore = function(e) {};
}

Bind the “load more” function to a DOM node

data-ng-click is a directive. Directives are instructions that tell the AngularJS compiler to attach a given behavior to a DOM node when a certain marker appears in the DOM. Here the directive will trigger the loadMore function when the button is clicked.

<body data-ng-app="demo">
    <div id="widget" class="panel panel-default" data-ng-controller="MyCtrl">
        <div class="panel-heading">Widget Title <span class="badge">{{count}}</span></div>
        <button class="more btn btn-primary btn-block" data-ng-click="loadMore($event)">More</button>
    </div>
</body>

Populate the “load more” function

To keep it simple, the “load more” function will query our factory ResourceFactory with start and desiredPosts parameters. The result will be appended to $scope.list.

function MyCtrl($scope, ResourceFactory) {
    $scope.list = [];
    $scope.count = $scope.list.length;
    $scope.loadMore = function(e) {
        ResourceFactory.query({
            start: $scope.count,
            desiredPosts: 2
        }, function(data) {
            if (data.length > 0) {
                // update list and count (count is also the next start offset)
                $scope.list = $scope.list.concat(data);
                $scope.count = $scope.list.length;
            }
        });
    };
}

Now every time a user hits the “More” button an Ajax request will be sent and fresh new data will be populated inside $scope.list.
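
Stripped of Angular, the accumulation logic above boils down to concatenating each page onto the list and using the new length as the next start offset. A minimal sketch in plain JavaScript (makeWidget and addPage are illustrative names of my own, not part of the demo):

```javascript
// Sketch of the loadMore accumulation logic: each Ajax page is appended
// to the list and the new length becomes the next `start` to request.
function makeWidget() {
    var list = [];
    return {
        addPage: function (data) {
            if (data.length > 0) {
                list = list.concat(data);
            }
            return list.length; // next `start` value to request
        },
        count: function () { return list.length; }
    };
}

var widget = makeWidget();
widget.addPage([{ ID: '1' }, { ID: '2' }]); // first Ajax response
widget.addPage([{ ID: '3' }]);              // second Ajax response
console.log(widget.count()); // 3
```

Because the list length doubles as the start parameter, the widget naturally stops growing once the API starts returning empty arrays.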

Iterate through the list

By using the data-ng-repeat directive we tell AngularJS to inject the $scope.list items within the DOM. Every time $scope.list is updated the DOM will be automatically updated.

<body data-ng-app="demo">
    <div id="widget" class="panel panel-default" data-ng-controller="MyCtrl">
        <div class="panel-heading">Widget Title <span class="badge">{{count}}</span></div>
        <div class="content panel-body">
            <div id="item-{{item.ID}}" class="item" data-ng-repeat="item in list">
                <img class="thumbnail pull-left" data-ng-src="{{item.post_img}}">
                <a href="#">
                    <h4 style="margin: 5px 0px;">{{item.post_title}}</h4>
                </a>
                <p>{{item.post_content}}</p>
                <div class="well well-sm">{{item.post_date}}</div>
            </div>
        </div>
        <button class="more btn btn-primary btn-block" data-ng-click="loadMore($event)">More</button>
    </div>
</body>

Please note that you can access item properties using {{item.yourProperty}} within the data-ng-repeat directive.

Our “Load More” widget is now finished. Go check out the demo to see the result!