How to install Node.js 4.x on Ubuntu

Via NVM (Node Version Manager)

# get the latest nvm release https://github.com/creationix/nvm/releases
curl https://raw.githubusercontent.com/creationix/nvm/v0.29.0/install.sh | bash

# OR
wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.29.0/install.sh | bash

# Restart your terminal or run
source ~/.profile

# Install the NodeJS version needed
nvm install 4.1

# update npm
npm install npm -g

Via PPA

curl -sL https://deb.nodesource.com/setup_4.x | sudo bash -
sudo apt-get install -y nodejs

# update npm
npm install npm -g

Changing the Apache2 document root in Ubuntu 14.x

Having the Apache document root at the default /var/www can sometimes be annoying because of permissions. I strongly recommend using a folder inside your home directory instead.

For the sake of this post, I will use the folder Sites (which I created under my home folder, at /home/shprink/Sites/).

PS: The Apache version used for this post is 2.4.10 (Ubuntu); you can check yours with the apache2 -v command.

Changing apache2 document root

The default document root is set in the 000-default.conf file that is under /etc/apache2/sites-available folder.

$ cd /etc/apache2/sites-available
$ sudo nano 000-default.conf

While the file is open, replace DocumentRoot /var/www/ with your new folder, e.g. DocumentRoot /home/shprink/Sites/

Set the right Apache configuration

The configuration of the /var/www folder is in /etc/apache2/apache2.conf. Edit this file to add the configuration of your new document root.

$ sudo nano /etc/apache2/apache2.conf

Copy the following:

<Directory /var/www/>
       Options Indexes FollowSymLinks
       AllowOverride None
       Require all granted
</Directory>

and change the directory path:

<Directory /home/shprink/Sites/>
       Options Indexes FollowSymLinks
       AllowOverride None
       Require all granted
</Directory>

Restart Apache

$ sudo service apache2 restart

Set the right permissions

All of your document root's parent folders must be executable by everyone. To check whether that is the case, you can use the namei utility, which lists the permissions of each component of the path:

$ namei -m /home/shprink/Sites/
f: /home/shprink/Sites/
 drwxr-xr-x /
 drwxr-xr-x home
 drwx------ shprink
 drwx------ Sites

Here you can see that the shprink and Sites permissions are not set properly.

If you open http://localhost/ in your browser now, you should get the following message:

Forbidden
You don’t have permission to access / on this server.

Open the Apache error log to see the exact error code, e.g. AH00035. It might give you more information.

$ sudo tail -f /var/log/apache2/error.log
[Mon Apr 06 09:04:26.518260 2015] [core:error] [pid 22139] (13)Permission denied: [client 127.0.0.1:45121] AH00035: access to / denied (filesystem path '/home/shprink/Sites') because search permissions are missing on a component of the path

To fix the permission problem for good, chmod 755 on each component should be enough.

$ chmod 755 /home/shprink/
$ chmod 755 /home/shprink/Sites/

Re-run namei to make sure everything is OK.

$ namei -m ~/Sites/
f: /home/shprink/Sites/
 drwxr-xr-x /
 drwxr-xr-x home
 drwxr-xr-x shprink
 drwxr-xr-x Sites

Now opening http://localhost/ should work as expected. If you are having trouble, please leave a comment.

Introduction to Webpack with practical examples

Webpack is taking the task automation market by storm. I have been using it for months now, and for most of my needs Webpack has taken over from Grunt and Gulp.

Webpack is a module bundler, it takes modules with dependencies and generates static assets representing those modules.

This post will focus only on Webpack “loaders” and “post loaders”.

Webpack Loaders

Loaders are pieces of code that can be injected in the middle of the compilation stream. Post loaders are called at the end of the stream.

webpack can only process JavaScript natively, but loaders are used to transform other resources into JavaScript. By doing so, every resource forms a module.

Source available on Github

Prerequisite

You will need to have Node and npm installed on your machine to go through the following examples.

Install

npm install webpack --save-dev

Now create a webpack.config.js file and dump this basic scaffolding in it:

var webpack = require('webpack'),
    path = require('path');

module.exports = {
    debug: true,
    entry: {
        main: './index.js'
    },
    output: {
        path: path.join(__dirname, 'dist'),
        filename: '[name].js'
    },
    module: {
        loaders: []
    }
};

Your main entry point is now index.js at the root of your folder. The compiled file will be written to the dist folder when compiling.

Compile

When your entry point is defined you can start the compilation using the CLI:

# Debug mode
webpack

# Production mode (minified version)
webpack -p

ECMAScript 6 compilation

ECMAScript 6 introduces tons of new features (arrows, classes, generators, modules, etc.) that can be used right now! To do so I recommend using Babel.

Installation:

npm install babel-loader --save-dev

Add the loader to the Webpack configuration:

loaders: [{
  test: /\.es6\.js$/,
  loader: "babel-loader"
}]

You can now require any ES6 module using require('./src/index.es6.js');

Result

Before

// Generators
var fibonacci = {
    [Symbol.iterator]: function*() {
        var pre = 0,
            cur = 1;
        for (;;) {
            var temp = pre;
            pre = cur;
            cur += temp;
            yield cur;
        }
    }
}

module.exports = fibonacci;

After

"use strict";

var fibonacci = function() {
    var a = {};
    a[Symbol.iterator] = regeneratorRuntime.mark(function b() {
        var a, c, d;
        return regeneratorRuntime.wrap(function e(b) {
            while (1) switch (b.prev = b.next) {
              case 0:
                a = 0, c = 1;

              case 1:
                d = a;
                a = c;
                c += d;
                b.next = 6;
                return c;

              case 6:
                b.next = 1;
                break;

              case 8:
              case "end":
                return b.stop();
            }
        }, b, this);
    });
    return a;
}();

module.exports = fibonacci;
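
To see the generator in action, the module can be consumed like this (a plain Node sketch with the fibonacci object from the “Before” listing inlined, so it runs without webpack):

```javascript
// Same iterable as in the "Before" listing above.
var fibonacci = {
    [Symbol.iterator]: function*() {
        var pre = 0,
            cur = 1;
        for (;;) {
            var temp = pre;
            pre = cur;
            cur += temp;
            yield cur;
        }
    }
};

// Take the first five values from the infinite sequence.
var sequence = [];
for (var n of fibonacci) {
    sequence.push(n);
    if (sequence.length === 5) break;
}
// sequence is now [1, 2, 3, 5, 8]
```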

CoffeeScript compilation

CoffeeScript needs no introduction; it has been popular for a long time now.

Installation:

npm install coffee-loader --save-dev

Add the loader to the Webpack configuration:

loaders: [{
  test: /\.coffee$/,
  loader: "coffee-loader"
}]

You can now require any CoffeeScript module using require('./src/index.coffee');

Result

Before

module.exports = ->
    square = (x) -> x * x
    math =
      root: Math.sqrt
      square: square
      cube: (x) -> x * square x

After

module.exports = function() {
  var math, square;
  square = function(x) {
    return x * x;
  };
  return math = {
    root: Math.sqrt,
    square: square,
    cube: function(x) {
      return x * square(x);
    }
  };
};
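
The compiled module is a factory function; requiring and calling it (here inlined so the sketch runs standalone) gives you the math object:

```javascript
// Same output as the "After" listing above, inlined for the example.
var makeMath = function() {
  var math, square;
  square = function(x) {
    return x * x;
  };
  return math = {
    root: Math.sqrt,
    square: square,
    cube: function(x) {
      return x * square(x);
    }
  };
};

// In a webpack build this would be:
// var makeMath = require('./src/index.coffee');
var math = makeMath();
// math.square(4) === 16, math.cube(3) === 27, math.root(9) === 3
```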

Require CSS files

Installation:

npm install css-loader --save-dev

Add the loader to the Webpack configuration:

The css-loader turns a stylesheet into a module whose content can be injected into your page at runtime (the injection itself is usually handled by the style-loader). It also takes care of minification when compiling in production mode, e.g. webpack -p

loaders: [{
  test: /\.css$/,
  loader: "css-loader"
}]

You can now require any CSS file using require('./src/index.css');


Autoprefix CSS files

Installation:

npm install autoprefixer-loader --save-dev

Add the loader to the Webpack configuration:

What’s really annoying with CSS is that some properties are not implemented the same way across browsers. That’s the reason behind the prefixes: -ms- for IE, -moz- for Firefox and -webkit- for Chrome, Opera and Safari. The autoprefixer loader allows you to use the standard CSS properties without having to care about browser compatibility.

loaders: [{
  test: /\.css$/,
  loader: "css-loader!autoprefixer-loader"
}]

You can now require any CSS file using require('./src/index.css');

Result

Before

body {
    display: flex; 
}

After

body {
    display: -webkit-box;      /* OLD - iOS 6-, Safari 3.1-6 */
    display: -ms-flexbox;      /* TWEENER - IE 10 */
    display: -webkit-flex;     /* NEW - Chrome */
    display: flex;             /* NEW, Spec - Opera 12.1, Firefox 20+ */
}

Sass compilation

Sass lets you use features that don’t exist in CSS yet, like variables, nesting, mixins, inheritance, etc. Code written in Sass is less complex, and therefore easier for developers to maintain, than standard CSS.

Installation:

npm install css-loader sass-loader --save-dev

Add the loader to the Webpack configuration:

Here we use two loaders at the same time. Loader chains are applied from right to left: the sass-loader first compiles Sass into CSS, then the css-loader turns the result into a module that can be injected into your page at runtime.

loaders: [{
  test: /\.scss$/,
  loader: "css-loader!sass-loader"
}]

You can now require any Sass file using require('./src/index.scss');

Result

Before

$font-stack:    Helvetica, sans-serif;
$primary-color: #333;

body {
  font: 100% $font-stack;
  color: $primary-color;
}

After

body {
  font: 100% Helvetica, sans-serif;
  color: #333;
}

Less compilation

Less is a CSS pre-processor similar to Sass.

Installation:

npm install css-loader less-loader --save-dev

Add the loader to the Webpack configuration:

Here we use two loaders at the same time. Loader chains are applied from right to left: the less-loader first compiles Less into CSS, then the css-loader turns the result into a module that can be injected into your page at runtime.

loaders: [{
  test: /\.less$/,
  loader: "css-loader!less-loader"
}]

You can now require any Less file using require('./src/index.less');

Result

Before

@font-stack: Helvetica, sans-serif;
@primary-color: #333;
body {
    font: 100% @font-stack;
    color: @primary-color;
}

After

body {
  font: 100% Helvetica, sans-serif;
  color: #333;
}

Move files

You can move any type of file around by using the file-loader.

Installation:

npm install file-loader --save-dev

Add the loader to the Webpack configuration:

For the example, let’s try to move images from their directory to a brand new img folder with a naming convention of img-[hash:6].[ext].

loaders: [{
  test: /\.(png|jpg|gif)$/,
  loader: "file-loader?name=img/img-[hash:6].[ext]"
}]

You can now require any image file using require('./src/image_big.jpg');

Result

The image ./src/img.jpg will be copied and renamed like so: dist/img/img-a4bd04.jpg


Encode files

Sometimes you do not want to make HTTP requests to fetch assets. For example, what’s the point of making an HTTP request for a tiny image when you can embed it directly, base64-encoded? The url-loader does just that. All you need to do is set the limit (in bytes) under which you want the encoded version of the file (if the file is bigger, you get the path to it instead).

Installation:

npm install url-loader --save-dev

Add the loader to the Webpack configuration:

When images are under 5 kB we get a base64 data URI for them; when they are over 5 kB we get the path to them (exactly as with the file-loader).

loaders: [{
  test: /\.(png|jpg|gif)$/,
  loader: "url-loader?limit=5000&name=img/img-[hash:6].[ext]"
}]

Result

Before

var imgBig = '<img src="' + require("./src/image_big.jpg") + '" />';
var imgSmall = '<img src="' + require("./src/image_small.png") + '" />';

After

var imgBig = '<img src="img/img-a4bd04.jpg" />';
var imgSmall = '<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAA" />';

Require HTML files

The html-loader turns any HTML file into a module, and requires any image dependency along the way!

Installation:

npm install html-loader --save-dev

Add the loader to the Webpack configuration:

loaders: [{
  test: /\.html$/,
  loader: "html-loader"
}]

You can now require any HTML file using require('./src/index.html');. All images will also be treated as dependencies and therefore go through their own stream of events (see Encode files).

Result

Before

<html>
    <head>
        <title></title>
        <meta charset="UTF-8">
        <meta name="viewport" content="width=device-width">
    </head>
    <body>
        <img src="./image_small.png">
    </body>
</html>

After

module.exports = '<html>\n    <head>\n        <title></title>\n        <meta charset="UTF-8">\n        <meta name="viewport" content="width=device-width">\n    </head>\n    <body><img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAA"></body>\n</html>';

Expose any module

The expose-loader allows you to bind any module to the global scope.

Installation:

npm install expose-loader --save-dev

Add the loader to the Webpack configuration:

In this example we want lodash (an Underscore.js-compatible utility library) to be exposed in the global scope as _.

loaders: [{
  test: require.resolve("lodash"),
  loader: 'expose?_'
}]

Requiring lodash (require('lodash');) will now also expose it globally. This is handy for popular modules such as AngularJS, jQuery, Underscore.js, Moment.js or Hammer.js, which other scripts often expect to find in the global scope.
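
Conceptually, what the expose-loader does is assign the module’s export to a property of the global object. A rough sketch (with a stand-in object instead of the real lodash, so it runs standalone):

```javascript
// Rough sketch of what expose-loader generates (not its actual code):
// grab the environment's global object and attach the module to it.
var globalScope = typeof window !== 'undefined' ? window : global;

// Stand-in for require('lodash'), with one illustrative helper.
var fakeLodash = {
    uniq: function (arr) {
        return arr.filter(function (value, index) {
            return arr.indexOf(value) === index;
        });
    }
};

globalScope._ = fakeLodash;

// Any script can now use the global:
// _.uniq([1, 1, 2]) → [1, 2]
```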

How to use Git submodules to facilitate your development routine

I have been using Git submodules for a while now, and at times I was lost on how to use them correctly and efficiently. This post will guide you through the purpose and usage of Git submodules.

What is a Git submodule?

Git submodule is a Git command that essentially allows external repositories to be embedded within a dedicated subdirectory of the source tree.

Why use Git submodules?

It often happens that while working on one project, you need to use another project from within it. Perhaps it’s a library that a third party developed or that you’re developing separately and using in multiple parent projects.

In my opinion there are several situations where using submodules is pretty useful:

  • When you want to separate part of your code into another repository for other projects to use, while still being able to use it within your project as a dependency.
  • When you do NOT want to use a dependency manager (such as Composer for PHP). Using Composer for personal repositories can be overkill; submodules in that case are a fair choice.
  • When you use a front-end dependency manager such as Bower. Bower is good, but it only fetches the distribution files of each dependency. It will never clone Git repositories (therefore your dependencies are read-only).

Starting a repository with submodules

From scratch

Using a Git repository that does not have submodules yet.

Add a submodule

$ git submodule add git@github.com:shprink/BttrLazyLoading.git

From existing submodules

Using a Git repository that already has submodules registered.

Register submodules

$ git submodule init
Submodule 'lib/BttrLazyLoading' (git@github.com:shprink/BttrLazyLoading.git) registered for path 'lib/BttrLazyLoading'
Submodule 'lib/ShprinkOne' (git@github.com:shprink/Shprink-One.git) registered for path 'lib/ShprinkOne'

Checkout submodules

$ git submodule update
Cloning into 'lib/BttrLazyLoading'...
Submodule path 'lib/BttrLazyLoading': checked out '270b55e177ca555bb4fa559d0e663178ac5006a3'
Cloning into 'lib/ShprinkOne'...
Submodule path 'lib/ShprinkOne': checked out '0dddd0f3e24a473675022c7f79c6b1a27a095914'

From a .gitmodules file

Using the .gitmodules file of another repository, you can recreate its submodule setup by just running a shell script.

.gitmodules file

[submodule "lib/native"]
	path = lib/native
	url = git@github.com:shprink/BttrLazyLoading.git
[submodule "lib/mobile"]
	path = lib/mobile
	url = git@github.com:shprink/Shprink-One.git

Shell script

Create an empty file yourfilename.sh and paste the script below:


#!/bin/sh

set -e

git config -f .gitmodules --get-regexp '^submodule\..*\.path$' |
    while read path_key path
    do
        url_key=$(echo $path_key | sed 's/\.path$/\.url/')
        url=$(git config -f .gitmodules --get "$url_key")
        git submodule add $url $path
    done

Run the script with:

$ sh yourfilename.sh

Daily routine

Now that your repository is ready, it is time to start working! Using submodules can increase the number of commands you will need to run, but not necessarily. I personally gained productivity using them.

Status

The git submodule status command lets you see the latest commit hash and the branch currently checked out.

$ git submodule status
270b55e177ca555bb4fa559d0e663178ac5006a3 lib/BttrLazyLoading (heads/master)
0dddd0f3e24a473675022c7f79c6b1a27a095914 lib/ShprinkOne (heads/master)

Update

The git submodule update command updates your submodules according to the latest source tree (root repository) known. It means that if your source tree is not up to date, you can end up using old versions of your submodules. This is pretty annoying, especially when working in a team.

If you want to always be up to date, your team needs to be rigorous about updating the source tree on every submodule commit…

After experiencing this problem I decided to use the git submodule foreach command instead. It simply loops over your submodules and executes a command in each one.

Update master on all submodules

$ git submodule foreach 'git pull origin master'

Commit

Now that you have worked on one or several submodules, it is time to commit your changes. To make sure you are ready to share your work, run git status on all submodules:

$ git submodule foreach 'git status'
Entering 'lib/BttrLazyLoading'
# On branch master
# Changes not staged for commit:
#   (use "git add <file>..." to update what will be committed)
#   (use "git checkout -- <file>..." to discard changes in working directory)
#
#    modified:   newFile.js
#
Entering 'lib/ShprinkOne'
# On branch master
nothing to commit (working directory clean)

Add your new files

$ git submodule foreach 'git add --all'

Commit your changes

$ git submodule foreach 'git commit -m "your commit message" || true'

Using || true makes sure that the loop does not stop when it reaches a repository with nothing to commit. Otherwise you would get the following error message:

Entering 'lib/ShprinkOne'
# On branch master
nothing to commit (working directory clean)
Stopping at 'lib/ShprinkOne'; script returned non-zero status.

Push to your remote

$ git submodule foreach 'git push origin master'


Introduction to Gulp.js with practical examples

What is Gulp.js?

Gulp.js is what we call a JavaScript task runner; it is open source and available on GitHub. It helps you automate repetitive tasks such as minification, compilation, unit testing, linting, etc. Gulp.js does not revolutionize automation but simplifies it tremendously.

Today the web automation ecosystem is dominated by Grunt.js (which is a great tool, by the way), but lately Gulp.js has been trending upward and may soon overtake Grunt.js (judging by GitHub popularity, aka “stars”: 7900 for Grunt.js and 6250 for Gulp.js).

How is it better than Grunt or Cakefile?

I would say it is neither better nor worse than Grunt or Cakefile; it is different. While Cakefile or Grunt use files to execute tasks, Gulp.js uses streams. A typical Cakefile or Grunt workflow is to execute a task that dumps a temporary file, then based on this file to execute another task that dumps another temporary file, and so on…

With Gulp.js everything happens on the fly using Node’s streams; temporary files are not needed anymore, which makes it easy to learn, use and enjoy.

Installation

Use the -g option to install Gulp globally on your machine.

sudo npm install -g gulp

Practical examples

Now let’s use Gulp a little. Throughout these examples we will discover how to use plugins to create specific tasks such as minifying, concatenating, or linting your code.

Source available on Github

HTML Minification

Using gulp-minify-html

npm install --save-dev gulp-minify-html

gulpfile.js:

// including plugins
var gulp = require('gulp')
, minifyHtml = require("gulp-minify-html");

// task
gulp.task('minify-html', function () {
	gulp.src('./Html/*.html') // path to your files
	.pipe(minifyHtml())
	.pipe(gulp.dest('path/to/destination'));
});

Run:

gulp minify-html

CSS Minification

Using gulp-minify-css

npm install --save-dev gulp-minify-css

gulpfile.js:

// including plugins
var gulp = require('gulp')
, minifyCss = require("gulp-minify-css");

// task
gulp.task('minify-css', function () {
	gulp.src('./Css/one.css') // path to your file
	.pipe(minifyCss())
	.pipe(gulp.dest('path/to/destination'));
});

Run:

gulp minify-css

JS Minification

Using gulp-uglify

npm install --save-dev gulp-uglify

gulpfile.js:

// including plugins
var gulp = require('gulp')
, uglify = require("gulp-uglify");

// task
gulp.task('minify-js', function () {
	gulp.src('./JavaScript/*.js') // path to your files
	.pipe(uglify())
	.pipe(gulp.dest('path/to/destination'));
});

Run:

gulp minify-js

CoffeeScript Compilation

Using gulp-coffee

npm install --save-dev gulp-coffee

gulpfile.js:

// including plugins
var gulp = require('gulp')
, coffee = require("gulp-coffee");

// task
gulp.task('compile-coffee', function () {
	gulp.src('./CoffeeScript/one.coffee') // path to your file
	.pipe(coffee())
	.pipe(gulp.dest('path/to/destination'));
});

Run:

gulp compile-coffee

Less Compilation

Using gulp-less

npm install --save-dev gulp-less

gulpfile.js:

// including plugins
var gulp = require('gulp')
, less = require("gulp-less");

// task
gulp.task('compile-less', function () {
	gulp.src('./Less/one.less') // path to your file
	.pipe(less())
	.pipe(gulp.dest('path/to/destination'));
});

Run:

gulp compile-less

Sass Compilation

Using gulp-sass

npm install --save-dev gulp-sass

gulpfile.js:

// including plugins
var gulp = require('gulp')
, sass = require("gulp-sass");

// task
gulp.task('compile-sass', function () {
	gulp.src('./Sass/one.sass') // path to your file
	.pipe(sass())
	.pipe(gulp.dest('path/to/destination'));
});

Run:

gulp compile-sass

ECMAScript 6 Compilation

Using gulp-babel

npm install --save-dev gulp-babel

gulpfile.js:

// including plugins
var gulp = require('gulp')
, babel = require("gulp-babel");

// task
gulp.task('compile-es6', function () {
	gulp.src('./ES6/one.es6.js')
        .pipe(babel())
	.pipe(gulp.dest('path/to/destination'));
});

Run:

gulp compile-es6

JavaScript Linting

Using gulp-jshint

npm install --save-dev gulp-jshint

gulpfile.js:

// including plugins
var gulp = require('gulp')
, jshint = require("gulp-jshint");

// task
gulp.task('jsLint', function () {
	gulp.src('./JavaScript/*.js') // path to your files
	.pipe(jshint())
	.pipe(jshint.reporter()); // Dump results
});

In case of success:

[gulp] Starting 'jsLint'...
[gulp] Finished 'jsLint' after 6.47 ms

In case of failure:

[gulp] Starting 'jsLint'...
[gulp] Finished 'jsLint' after 5.86 ms
/var/www/gulp-examples/JavaScript/two.js: line 3, col 15, Expected '}' to match '{' from line 3 and instead saw '['.
/var/www/gulp-examples/JavaScript/two.js: line 3, col 16, Missing semicolon.
/var/www/gulp-examples/JavaScript/two.js: line 3, col 154, Expected an assignment or function call and instead saw an expression.

3 errors

Run:

gulp jsLint

CoffeeScript Linting

Using gulp-coffeelint

npm install --save-dev gulp-coffeelint

gulpfile.js:

// including plugins
var gulp = require('gulp')
, coffeelint = require("gulp-coffeelint");

// task
gulp.task('coffeeLint', function () {
	gulp.src('./CoffeeScript/*.coffee') // path to your files
	.pipe(coffeelint())
	.pipe(coffeelint.reporter());
});

In case of success:

[gulp] Starting 'coffeeLint'...
[gulp] Finished 'coffeeLint' after 7.37 ms

In case of failure:

[gulp] Starting 'coffeeLint'...
[gulp] Finished 'coffeeLint' after 6.25 ms

one.coffee
✗  line 3  Line contains a trailing semicolon

✗ 1 error

Run:

gulp coffeeLint

Rename a file

Using gulp-rename

npm install --save-dev gulp-rename

gulpfile.js:

// including plugins
var gulp = require('gulp')
, rename = require('gulp-rename')
, coffee = require("gulp-coffee");

// task
gulp.task('rename', function () {
	gulp.src('./CoffeeScript/one.coffee') // path to your file
	.pipe(coffee())  // compile coffeeScript
	.pipe(rename('renamed.js')) // rename into "renamed.js" (original name "one.js")
	.pipe(gulp.dest('path/to/destination'));
});

Run:

gulp rename

Concatenate files

Using gulp-concat

npm install --save-dev gulp-concat

gulpfile.js:

// including plugins
var gulp = require('gulp')
, concat = require("gulp-concat");

// task
gulp.task('concat', function () {
	gulp.src('./JavaScript/*.js') // path to your files
	.pipe(concat('concat.js'))  // concat and name it "concat.js"
	.pipe(gulp.dest('path/to/destination'));
});

Run:

gulp concat

Add copyright

Using gulp-header

npm install --save-dev gulp-header

Copyright

/*
Gulp Examples by @julienrenaux:

* https://github.com/shprink
* https://twitter.com/julienrenaux
* https://www.facebook.com/julienrenauxblog

Full source at https://github.com/shprink/gulp-examples

MIT License, https://github.com/shprink/gulp-examples/blob/master/LICENSE
*/

gulpfile.js:

// including plugins
var gulp = require('gulp')
, fs = require('fs')
, concat = require("gulp-concat")
, header = require("gulp-header");

// functions

// Get copyright using NodeJs file system
var getCopyright = function () {
	return fs.readFileSync('Copyright', 'utf8');
};

// task
gulp.task('concat-copyright', function () {
	gulp.src('./JavaScript/*.js') // path to your files
	.pipe(concat('concat-copyright.js')) // concat and name it "concat-copyright.js"
	.pipe(header(getCopyright()))
	.pipe(gulp.dest('path/to/destination'));
});

Run:

gulp concat-copyright

Add copyright with version

Using gulp-header and Node’s file system

npm install --save-dev gulp-header

Copyright

/*
Gulp Examples by @julienrenaux:

* https://github.com/shprink
* https://twitter.com/julienrenaux
* https://www.facebook.com/julienrenauxblog

Version: <%= version %>
Full source at https://github.com/shprink/gulp-examples

MIT License, https://github.com/shprink/gulp-examples/blob/master/LICENSE
*/

Version

1.0.0

gulpfile.js:

// including plugins
var gulp = require('gulp')
, fs = require('fs')
, concat = require("gulp-concat")
, header = require("gulp-header");

// functions

// Get the version using Node's file system
var getVersion = function () {
	return fs.readFileSync('Version', 'utf8').trim();
};

// Get the copyright template using Node's file system
var getCopyrightVersion = function () {
	return fs.readFileSync('Copyright', 'utf8');
};

// task
gulp.task('concat-copyright-version', function () {
	gulp.src('./JavaScript/*.js')
	.pipe(concat('concat-copyright-version.js')) // concat and name it "concat-copyright-version.js"
	.pipe(header(getCopyrightVersion(), {version: getVersion()}))
	.pipe(gulp.dest('path/to/destination'));
});

Run:

gulp concat-copyright-version

Mix them up (Lint, Concat, Compile, Minify etc.)

The purpose of this task is to mix the previous tasks into just one.

Copyright

/*
Gulp Examples by @julienrenaux:

* https://github.com/shprink
* https://twitter.com/julienrenaux
* https://www.facebook.com/julienrenauxblog

Version: <%= version %>
Full source at https://github.com/shprink/gulp-examples

MIT License, https://github.com/shprink/gulp-examples/blob/master/LICENSE
*/

Version

1.0.0

gulpfile.js:

// including plugins
var gulp = require('gulp')
, fs = require('fs')
, coffeelint = require("gulp-coffeelint")
, coffee = require("gulp-coffee")
, uglify = require("gulp-uglify")
, concat = require("gulp-concat")
, header = require("gulp-header");

// functions

// Get the version using Node's file system
var getVersion = function () {
	return fs.readFileSync('Version', 'utf8').trim();
};

// Get the copyright template using Node's file system
var getCopyrightVersion = function () {
	return fs.readFileSync('Copyright', 'utf8');
};

// task
gulp.task('bundle-one', function () {
	gulp.src('./CoffeeScript/*.coffee') // path to your files
	.pipe(coffeelint()) // lint files
	.pipe(coffeelint.reporter('fail')) // make sure the task fails if not compliant
	.pipe(concat('bundleOne.js')) // concat files
	.pipe(coffee()) // compile coffee
	.pipe(uglify()) // minify files
	.pipe(header(getCopyrightVersion(), {version: getVersion()})) // Add the copyright
	.pipe(gulp.dest('path/to/destination'));
});

Run:

gulp bundle-one

Task automation

Using gulp.watch you can easily automate tasks when files are modified. It is really convenient because you do not have to run single tasks by hand every time a file changes, and therefore your code is always up to date.

// including plugins
var gulp = require('gulp');

// task
gulp.task('watch-coffeescript', function () {
    gulp.watch(['./CoffeeScript/*.coffee'], ['compile-coffee']);
});

Run:

gulp watch-coffeescript

and see what happens when you modify one of the source files.

How to setup name-based Virtual Hosts on Ubuntu 13.10 and Apache 2.4.6

Let’s imagine you have a project named myvhost-www within the folder /var/www. If you wanted to access it through your browser, the URL would be http://localhost/myvhost-www/

Wouldn’t it be better to use the URL myvhost.com? To do so you will need a name-based virtual host.

With name-based virtual hosting, the server relies on the client to report the hostname as part of the HTTP headers. Using this technique, many different hosts can share the same IP address (which is the case for your personal computer).

Step 1: Create a Virtual Host configuration file

Create the file myvhost.conf within /etc/apache2/sites-available folder.

The configuration file MUST have a .conf extension.

$ cd /etc/apache2/sites-available
# create an empty file
$ sudo touch myvhost.conf
# Edit it
$ sudo vi myvhost.conf

Within this file insert the following content:

<VirtualHost *:80>
        ServerName myvhost.com
        DocumentRoot "/var/www/myvhost-www"
</VirtualHost>

Inside each VirtualHost block you need at minimum a ServerName directive to designate which host is served and a DocumentRoot directive to indicate where in the filesystem the content for that host lives. In the usual case where any and all IP addresses on the server should be used, you can use * as the address argument (as in “*:80” above).

Step 2: Enable the Virtual Host

# Enabling site myvhost
$ sudo a2ensite myvhost.conf

# To activate the new configuration, you need to run:
$ sudo service apache2 reload

If you check the sites-enabled folder you will see that your new virtual host is enabled and points to the file we created previously.

$ cd /etc/apache2/sites-enabled
$ ll
lrwxrwxrwx 1 root root   31 Mar  8 18:47 myvhost.conf -> ../sites-available/myvhost.conf

Step 3: Register the Virtual Host to the hosts file

The hosts file is a computer file used by an operating system to map hostnames to IP addresses.

$ sudo vi /etc/hosts

Add the following line to this file:

127.0.0.1       myvhost.com

Entries in the hosts file take precedence over DNS by default, which means that even if the myvhost.com domain name exists, the system will not look up its DNS record.

Step 4: Enable mod_rewrite

mod_rewrite provides a way to modify incoming URL requests dynamically, based on regular-expression rules. This allows you to map arbitrary URLs onto your internal URL structure in any way you like.

sudo a2enmod rewrite
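
As a hypothetical example, rules like the following (inside the virtual host block, or in an .htaccess file once AllowOverride permits it) route every request that does not match an existing file to a front controller:

```apache
RewriteEngine On
# If the requested path is not a real file on disk...
RewriteCond %{REQUEST_FILENAME} !-f
# ...hand the request to index.php, keeping the query string.
RewriteRule ^ index.php [QSA,L]
```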

You have now configured a virtual host and can point your browser to myvhost.com to see the same content as on http://localhost/myvhost-www/


Leading the SVN to Git migration

Once upon a time I started a new challenge within a new company. This company grew so fast that its development processes and tools were no longer suitable. Most projects started with only one developer, and at the time the company chose Subversion (SVN) as its main version control system and a classic Waterfall (linear) process to manage projects. These choices froze things for years…

Coming from a large team (10 members) in a totally Agile environment, the gap I faced was huge. The lack of productivity the company suffered resulted from those ancient choices. A colleague and I decided to change things by introducing several development tools and practices such as Git, GitLab (a GitHub-like platform), Jenkins (continuous integration), Composer (a dependency manager for PHP) and Scrum (an Agile development framework).

The SVN to Git transition can be pretty hard to handle, depending on the size of your company, the number of repositories at stake, and the number of branches and tags created.

To avoid losing time and money, things have to be well prepared. In this post I will share my experience and some tricks to lead the SVN to Git transition successfully.

Why do companies still use Subversion?

Despite Git's popularity nowadays, a tremendous number of companies still use Subversion.

Git is not better than Subversion, but it is not worse either. It is different. There are many possible reasons why companies still use Subversion:

  1. Subversion’s initial release was in 2000, while Git’s was in 2005. Any software company created before 2005 may therefore have started with Subversion (eBay, for example).
  2. Subversion is less complex and suits developers working alone better. Git adds complexity: two modes of creating repositories, checkout vs. clone, commit vs. push… You have to know which commands work locally and which work with “the remote”.
  3. The company grew too quickly: the lack of time and money most start-ups face can lead to underestimating developers’ needs and making unreasonable architecture choices.
  4. The fear of change, well explained by Kurt Lewin as a three-stage process:
    1. Unfreezing: dismantling the existing “mind set”; defense mechanisms have to be bypassed
    2. Movement: when the change occurs; typically a period of confusion and transition
    3. Freezing: the new mindset crystallizes and one’s comfort level returns to previous levels

What does Git do better than SVN?

I invite you to watch Linus Torvalds’ (Git creator and Linux project coordinator) tech talk at Google.

“If you like using CVS, you should be in some kind of mental institution or somewhere else.” – Linus Torvalds

Git is Scalable

Git is perfect for medium to large projects because it is scalable: the more files and developers involved in your project, the more you benefit from it.

Git is Distributed

Git is a DVCS (Distributed Version Control System), meaning that every user has a complete copy of the repository data (including history) stored locally, while SVN has one central repository. Developers can therefore commit offline and synchronize their work with remote repositories when back online.
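
As a quick illustration (a throwaway demo repository; nothing network-related is needed):

```shell
# Demo: Git commits are purely local operations; no server is involved.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email "demo@example.com"
git config user.name "Demo User"
echo "hello" > file.txt
git add file.txt
git commit -qm "First commit"    # succeeds with no network connection at all
git log --oneline                # the commit is already in the local history
```

Synchronizing with a remote (git push / git pull) can then happen whenever you are back online.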

Git branching and merging support is a lot better

SVN isn’t branch-centric while Git is designed around the idea of branching. Making branches, using branches, merging two branches together, merging three branches together, merging branches from local and remote repositories together – Git is good at branches.

Git Flow

Git-Flow is a set of Git extensions that handle most high-level repository operations (feature, hotfix and release) based on Vincent Driessen’s branching model. I recommend using this tool, but only if you understand basic Git commands (git branch, git checkout, git pull, etc.); otherwise you will get lost in some cases.

OSX Installation

$ brew install git-flow

Linux Installation

$ sudo apt-get install git-flow

Windows (Cygwin) Installation

$ wget -q -O - --no-check-certificate https://github.com/nvie/gitflow/raw/develop/contrib/gitflow-installer.sh | bash
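
To demystify the tool, here is a rough sketch in plain Git of what `git flow feature start` / `git flow feature finish` do (simplified; git-flow adds safety checks and configuration on top of this):

```shell
# Simplified equivalent of "git flow feature start/finish my-feature"
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git config user.email "demo@example.com"
git config user.name "Demo User"
git commit -q --allow-empty -m "initial commit"
git branch develop
# feature start: create the feature branch off develop
git checkout -q -b feature/my-feature develop
git commit -q --allow-empty -m "work on the feature"
# feature finish: merge back into develop and delete the branch
git checkout -q develop
git merge -q --no-ff -m "Merge feature/my-feature" feature/my-feature
git branch -d feature/my-feature
git log --oneline develop
```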

Git is Fast

Git operations are much faster than SVN’s. All operations (except push and pull) are done locally, so there is no network latency involved in most daily commands (git diff, git log, git commit, git branch, git merge, etc.). A Git repository is also around 30x smaller than an equivalent SVN repository, which is not negligible when cloning (or backing up).

Preparation

Teaching

Being on GitHub does not mean being competent at GIT!

Teaching Git to developers is essential. Git can be hard to learn, especially if you are used to SVN. You need to insist on what Git does better than SVN and on the things you can now do with Git that you could not with SVN. I have seen developers with GitHub accounts who were lost using Git in a work environment: being on GitHub does not mean being competent at Git!

Here are some of the slides I presented to developers:

http://www.slideshare.net/julienrenaux/git-presentation-31666204

Creating an authors.txt file to map SVN users to Git users

SVN and Git do not store the committer’s identity the same way: SVN stores the username, while Git stores the full name and email address.

Therefore prior to migrating to Git, you need to create an author mapping file that has the following format:

fdeveloper  = First Developer <first.developer@company.com>
sdeveloper = Second Developer <second.developer@company.com>
tdeveloper  = Third Developer <third.developer@company.com>
etc.

If you missed this step and have already migrated to Git, don’t worry: we can still rewrite history using the command git filter-branch!

git filter-branch --commit-filter '
        if [ "$GIT_COMMITTER_NAME" = "fdeveloper" ]; # SVN username
        then
                GIT_COMMITTER_NAME="First Developer";
                GIT_AUTHOR_NAME="First Developer";
                GIT_COMMITTER_EMAIL="first.developer@company.com";
                GIT_AUTHOR_EMAIL="first.developer@company.com";
                git commit-tree "$@";
        else
                git commit-tree "$@";
        fi' HEAD

This command will go through every single commit and if necessary change the developer information.

Using this command can be pretty slow, depending on the number of commits involved.

Migrating

git svn is a simple conduit for changesets between Subversion and git. It provides a bidirectional flow of changes between a Subversion and a git repository.

Clone

For a completely transparent transition (importing the commit history) you will need to use git svn:

$ git svn clone -A ~/Desktop/authors.txt svn://IP@/Project/trunk .

You can also tell git svn not to include the metadata that Subversion normally imports, by passing --no-metadata:

$ git svn clone -A ~/Desktop/authors.txt svn://IP@/Project/trunk . --no-metadata

Migrate tags

This takes the references that were remote branches starting with tags/ and makes them real (lightweight) tags.

$ git for-each-ref refs/remotes/tags | cut -d / -f 4- | grep -v @ |
  while read tagname; do
    git tag "$tagname" "tags/$tagname"
    git branch -r -d "tags/$tagname"
  done
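
Once the repository is converted, you would typically point it at your new Git server and push everything (branches and tags). The sketch below uses a local bare repository as a stand-in for the server; the remote URL is hypothetical, so replace it with your own:

```shell
# Push a converted repository to a new Git remote (local bare repo as stand-in)
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q --bare central.git            # stand-in for your new Git server
git init -q work
cd work
git config user.email "demo@example.com"
git config user.name "Demo User"
git commit -q --allow-empty -m "imported from SVN"
git tag v1.0.0                            # e.g. a tag migrated from SVN
git remote add origin "$tmp/central.git"  # replace with your real remote URL
git push -q -u origin HEAD                # push the current branch
git push -q origin --tags                 # push the migrated tags
git ls-remote origin
```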

Conclusion

Do not underestimate the unwillingness of your coworkers!

If you want your migration to succeed, I suggest you spend a lot of time teaching Git to developers and confront them with real cases and things Git does better than SVN. Do not underestimate the unwillingness of your coworkers!

Once everybody is up to speed with Git, you can start working on your migration. If you use git svn, things should be pretty smooth, but do not forget to make backups, just in case 😉

How to automatically checkout the latest tag of a Git repository

Now that you know How to automatically tag a Git repository, an interesting trick to know is how to automatically check out the latest tag of a Git repository. Imagine you want to automatically release a stable version of your product (with or without dependencies) based on the latest version available in your repository.

An easy way to do it is to follow these steps:

# Get new tags from the remote
$ git fetch --tags

# Get the latest tag name
$ latestTag=$(git describe --tags `git rev-list --tags --max-count=1`)

# Checkout the latest tag
$ git checkout $latestTag

Xampp – Open SSL – Don’t know how to get public key from this private key

When enabling OpenSSL you might get this error message if you use Xampp on Windows. Follow these steps to resolve the problem:

  1. Stop your Apache.
  2. In the folder `xampp\php` you will find two DLLs: libeay32.dll and ssleay32.dll. Copy them into the folder `xampp\apache\bin`.
  3. Open the `xampp\php\php.ini` file and remove the semicolon in front of `extension=php_openssl.dll`.
  4. Start your Apache.

WordPress add_image_size cropping problem: How to enable GD library

While trying to enable the image resizing option in WordPress I figured that the cropping feature of the function below (located in /wp-includes/media.php) did not work on my localhost.

function add_image_size( $name, $width = 0, $height = 0, $crop = false )

Using the code below in functions.php, I was able to resize images (badly, though) but unable to crop them.

//This feature enables post-thumbnail support for a Theme
add_theme_support( 'post-thumbnails' );
add_image_size( 'post-image',  708, 400, true );
add_image_size( 'post-image-large',  708, 200, true );
add_image_size( 'post-image-small',  330, 150, true );

After some research I found a solution: to fix the cropping problem you need to install the GD library. GD is an open-source library for the dynamic creation of images. It supports JPEG, PNG and XPM formats, among others. Here is how to install it:

For Windows

All you need to do is uncomment (remove the semicolon at the beginning of) the line below in your php.ini:

;extension=php_gd2.dll

Once it is done save it and do not forget to restart your Apache server.

For Linux

At first I tried to follow the Windows step but I got this error message:

Warning: PHP Startup: Unable to load dynamic library
'/opt/lampp/lib/php/extensions/no-debug-non-zts-20090626/php_gd2.dll'
 - /opt/lampp/lib/php/extensions/no-debug-non-zts-20090626/php_gd2.dll:
 cannot open shared object file: No such file or directory in Unknown on line 0

For Linux it is a little more complicated than just uncommenting a line. To get it to work you will have to install the library:

sudo apt-get install php5-gd

Then restart Apache and it should work fine.

/etc/init.d/apache2 restart

or if you are using LAMPP/XAMPP

/opt/lampp/lampp restart

Once you have done all of this, cropping should work for newly uploaded images. Unfortunately, you will need to re-process old pictures; a great solution is to use the Regenerate Thumbnails plugin for WordPress.
