While working on different projects and in different environments, we often need to export a dump from one database and then import it into another. A while ago Slobodan wrote how to export and import a MySQL dump, and here is a guide on how to do it for PostgreSQL.

Export a PostgreSQL database dump

To export a PostgreSQL database we need to use the pg_dump tool, which dumps all the contents of a selected database into a single file. We need to run pg_dump on the command line on the computer where the database is stored. So, if the database is stored on a remote server, you will need to SSH to that server in order to run the following command:

pg_dump -U db_user -W -F t db_name > /path/to/your/file/dump_name.tar

Here we used the following options:

  • -U to specify which user will connect to the PostgreSQL database server.
  • -W or --password will force pg_dump to prompt for a password before connecting to the server.
  • -F is used to specify the format of the output file, which can be one of the following:
    • p – plain-text SQL script
    • c – custom-format archive
    • d – directory-format archive
    • t – tar-format archive

The custom, directory, and tar formats are suitable for input into pg_restore.

To see a list of all the available options use pg_dump -?.

With the given options, pg_dump will first prompt for a password for the database user db_user and then connect as that user to the database named db_name. After it successfully connects, the > output redirect will write the output produced by pg_dump to a file with the given name, in this case dump_name.tar.

The file created in the described process contains all the SQL commands and data required to replicate your database.
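As a side note, if the server accepts remote connections (pg_hba.conf permitting), pg_dump can also connect over the network instead of being run over SSH, using the -h and -p options. The hostname below is illustrative:

pg_dump -h db.example.com -p 5432 -U db_user -W -F t db_name > /path/to/your/file/dump_name.tar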

Import a PostgreSQL database dump

There are two ways to restore a PostgreSQL database:

  1. psql for restoring from a plain SQL script file created with pg_dump,
  2. pg_restore for restoring from a .tar file, directory, or custom format created with pg_dump.

1. Restore a database with psql

If your backup is a plain-text file containing an SQL script, then you can restore your database by using psql, the PostgreSQL interactive terminal, and running the following command:

psql -U db_user db_name < dump_name.sql

where db_user is the database user, db_name is the database name, and dump_name.sql is the name of your backup file.
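Note that psql restores into an existing database, so if the target database doesn't exist yet you can create it first with createdb. A minimal sketch using the same example names:

createdb -U db_user db_name
psql -U db_user db_name < dump_name.sql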

2. Restore a database with pg_restore

If you chose the custom, directory, or tar format when creating the backup file, then you will need to use pg_restore in order to restore your database:

pg_restore -d db_name /path/to/your/file/dump_name.tar -c -U db_user

If you use pg_restore you have various options available, for example:

  • -c to drop database objects before recreating them,
  • -C to create a database before restoring into it,
  • -e to exit if an error is encountered,
  • -F format to specify the format of the archive.

Use pg_restore -? to get the full list of available options.
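For example, combining -C with -d lets pg_restore create the database itself; in that case the database given with -d is only used for the initial connection. A sketch using the example names from above:

# connect to the maintenance database, then create and restore db_name
pg_restore -C -d postgres -U db_user /path/to/your/file/dump_name.tar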

You can find more info on the mentioned tools by running man pg_dump, man psql, and man pg_restore.

Starting with v9.2, PostgreSQL added native JSON support, which enabled us to take advantage of some of the benefits that come with a NoSQL database inside a traditional relational database such as PostgreSQL.

While working on a Ruby on Rails application that used a PostgreSQL database to store data, we came across an issue where we needed to implement a search by key within a JSON column.

We were already using Ransack for building search forms within the application, so we needed a way of telling Ransack to perform a search by a given key in our JSON column.

This is where Ransackers come in.

The premise behind Ransack is to provide access to Arel predicate methods.

You can find more information on Arel here.

In our case we needed to perform a search within the transactions table and its payload JSON column, looking for records containing a key called invoice_number. To achieve this we added the following ransacker to our Transaction model:

ransacker :invoice_number do |parent|
  Arel::Nodes::InfixOperation.new('->>', parent.table[:payload], 'invoice_number')
end

Now with our search set on invoice_number_cont (cont being just one of Ransack's available search predicates), if the user entered, for example, 123 in the search field, it would generate a query like this:

SELECT  "transactions".* FROM "transactions"  WHERE ("transactions"."payload" ->> 'invoice_number' ILIKE '%123%')

basically performing a search for records in the transactions table that have a key called invoice_number with a value containing the string 123, within the payload JSON column.
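For completeness, here is a minimal sketch of the corresponding search form, assuming the standard Ransack setup with a @q search object initialized in the controller:

# in the controller
@q = Transaction.ransack(params[:q])
@transactions = @q.result

and in the view:

<%= search_form_for @q do |f| %>
  <%= f.search_field :invoice_number_cont %>
  <%= f.submit %>
<% end %>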

I recently worked on a Rails project that had parts of pages in different languages. That can be a problem even if you have already translated the entire text into all the required languages, and you may even be tempted to hardcode parts of the text in other languages. Fortunately, there is an elegant way to solve that problem: just wrap parts of the template or partials into blocks with the desired locale, like this:

<% I18n.with_locale('en') do %>
  ...part of your template
  <%= render partial: 'some/partial' %>
<% end %>


Suppose there is a template with only a header and two paragraphs.

<h1><%= t('my_great_header') %></h1>

<p><%= t('first_paragraph') %></p>

<p><%= t('second_paragraph') %></p>

And locale files in English and French for that template.

# in config/locales/en.yml
en:
  my_great_header: "My English great header"
  first_paragraph: "First English paragraph"
  second_paragraph: "Second English paragraph"

# in config/locales/fr.yml
fr:
  my_great_header: "My French great header"
  first_paragraph: "First French paragraph"
  second_paragraph: "Second French paragraph"
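Wrapping the template in a with_locale block then renders the French strings regardless of the user's current locale, for example:

<% I18n.with_locale(:fr) do %>
  <h1><%= t('my_great_header') %></h1> <%# renders "My French great header" %>
<% end %>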

In the lifetime of every application, the time comes for it to be presented to everyone. That's why we have to put our application on a special server designed for this purpose. In short, we need to deploy our application. In this post you will see how to deploy an app with Capistrano 3.

Capistrano is a great developer tool used to automatically deploy projects to a remote server.

Add Capistrano to Rails app

I will assume you already have a server set up and an application ready to be deployed remotely.

We will use the capistrano-rails gem, so we need to add these gems to the Gemfile:

group :development do
  gem 'capistrano', '~> 3.5'
  gem 'capistrano-rails', '~> 1.1.6'
end

and install the gems with $ bundle install.

Initialize Capistrano

Then run the following command to create configuration files:

$ bundle exec cap install

This command creates all the necessary configuration files and directory structure with two stages, staging and production:
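The output should look roughly like this (file names as of Capistrano 3.x):

mkdir -p config/deploy
create config/deploy.rb
create config/deploy/staging.rb
create config/deploy/production.rb
mkdir -p lib/capistrano/tasks
create Capfile
Capified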


Sooner or later, every new Ruby developer needs to understand the difference between these two common rake tasks. Basically, these simple definitions tell us everything we need to know:

  • rake db:migrate runs migrations that have not run yet
  • rake db:schema:load loads the db/schema.rb file into the database.

but the real question is when to use one or the other.

Advice: when you are adding a new migration to an existing app, you need to run rake db:migrate, but when you join an existing application (especially an old one), or when you drop your application's database and need to create it again, always run rake db:schema:load to load the schema.
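In terms of commands, that boils down to something like this:

# day-to-day work on an app you already have set up
rake db:migrate

# fresh checkout of an existing app, or after dropping the database
rake db:create
rake db:schema:load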


I am working on an application which uses the globalize gem for ActiveRecord model/data translations. Globalize works this way:

  • first, specify the attributes which need to be translatable:
class Post < ActiveRecord::Base
  translates :title, :text
end
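The gem also needs a separate table to store the translations themselves; a minimal migration sketch (API names as in the globalize README):

class CreatePosts < ActiveRecord::Migration
  def up
    create_table :posts do |t|
      t.timestamps
    end
    # creates a post_translations table with locale, title and text columns
    Post.create_translation_table! title: :string, text: :text
  end

  def down
    drop_table :posts
    Post.drop_translation_table!
  end
end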

If you use Vagrant, VirtualBox, and Ubuntu to build your Rails apps and you want to test them with Cucumber scenarios, this is the right post for you. By default, Vagrant and VirtualBox use Ubuntu without an X server and GUI.

Everything goes well until you need the @javascript tag for your Cucumber scenario. @javascript uses a JavaScript-aware system to process web requests (e.g. Selenium) instead of the default (non-JavaScript-aware) webrat browser.

Install Mozilla Firefox

Selenium WebDriver is flexible and lets you run Selenium headless on servers with no display. But in order to run, Selenium needs to launch a browser, and if the machine has no display, the browser cannot be launched. So in order to use Selenium, you need to fake a display and let Selenium and the browser think they are running on a machine with a display.

Install the latest version of Mozilla Firefox:

sudo apt-get install firefox

Since Ubuntu is running without an X server, Selenium cannot start Firefox, because Firefox requires one.

Setting up a virtual X server

A virtual X server is required to make browsers run normally, by making them believe there is a display available even though it doesn't create any visible windows.
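A common choice is Xvfb (X virtual framebuffer). A minimal setup might look like this:

# install the X virtual framebuffer
sudo apt-get install xvfb

# start a virtual display number 99 and point programs at it
Xvfb :99 -ac &
export DISPLAY=:99

With DISPLAY set, Selenium can launch Firefox as if a real display were attached.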

Another simple task that's often hard for beginners is importing and exporting MySQL dumps. Here is a quick rundown on how to do it.

To export data you need to use mysqldump:

mysqldump -u db_user -p db_name > dump_name.sql

Options given to mysqldump are:

  • -u db_user – connect to the database as user db_user
  • -p – prompt for a password before connecting
  • db_name is the name of the MySQL database you want to dump
  • > dump_name.sql – by default mysqldump prints the dump to the terminal, but a simple output redirect with > will instead write it to the given filename, in this case dump_name.sql

Now that you have a dump_name.sql file with all the SQL queries needed to replicate your database, you can import it using the general-purpose mysql client:

mysql -u db_user -p db_name < dump_name.sql

The user, password, and database name options are the same as for mysqldump. Since mysql reads input from the terminal, this time we can use < to read input from the given file instead.
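Both tools also accept a -h option to work with a remote host directly (the hostname below is illustrative):

mysqldump -u db_user -p -h db.example.com db_name > dump_name.sql
mysql -u db_user -p -h db.example.com db_name < dump_name.sql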

As always for more information you can consult manual using man mysqldump and man mysql.

One of the simplest tasks is creating and extracting files using tar and gzip, yet for many new developers it is a daunting one. These days tar is mostly used simply to combine a few files into a single file, and gzip is then used to compress that file.

Here is a quick overview how to use tar and gzip to create and compress an archive:

# archive individual files
tar -cvzf myarchive.tar.gz /path/to/file1 /path/to/file2

# archive whole directory
tar -cvzf myarchive.tar.gz /path/to/dir

# archive whole directory but don't store full path
tar -cvzf myarchive.tar.gz -C /path/to/dir ./

Options given to tar are: c to create a new archive, v to be verbose, z to compress the resulting archive with gzip, and f to write the archive to the specified file. After the options you can list the files and dirs you want to archive.

In all examples we provide a full path to the file or dir we want to archive. In this case tar will store the files in the archive using the full path, which means that once you extract them you'll have the complete directory structure from the root dir onwards.

The way to avoid this is either to manually cd to the dir in which the files are stored, or to tell tar, using the -C option, to change dir before archiving the files.

Finally to extract an archive:

tar -xvzf myarchive.tar.gz

The x option tells tar to extract the archive into the current directory.
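The -C option works on extraction too; for example, to extract into a specific (existing) directory instead of the current one:

tar -xvzf myarchive.tar.gz -C /path/to/dest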

For more information you can consult manual using man tar.

A gem is a simple way to distribute functionality; it can be a small plugin, a Ruby library, or sometimes a whole program. Thanks to RubyGems, a gem hosting service, developers have a wide range of gems at their disposal, allowing them to easily add functionality to their applications.

But what if there is no gem available that will suit the functionality you need, and you find yourself writing the same code over and over again for different projects? Well, in that case you should consider making your own gem.

It's considered good practice to extract a gem out of an existing application, since that way you will have a better understanding of all the requirements as well as how the gem will be used. This blog post will illustrate just that with a real-life example, taking you through the process of creating a slug_converter gem.
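If you want to follow along, the usual starting point is Bundler's gem scaffold, which generates the gemspec and directory layout for you:

bundle gem slug_converter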

For our new project it was necessary to modify the starting id of our database. This could be handled in the migration that creates the table, but we decided to create a rake task that handles it for us.

The rake task we created detects which database is being used and executes the appropriate changes accordingly.
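To give an idea of what such a task can look like, here is a minimal sketch; the transactions table and the starting id are illustrative, and the adapter is detected through ActiveRecord:

# lib/tasks/starting_id.rake
namespace :db do
  desc 'Set the starting id of the transactions table'
  task set_starting_id: :environment do
    start_id = 10_000 # assumed starting value
    adapter  = ActiveRecord::Base.connection.adapter_name.downcase

    case adapter
    when /postgresql/
      # PostgreSQL keeps ids in a sequence
      ActiveRecord::Base.connection.execute(
        "ALTER SEQUENCE transactions_id_seq RESTART WITH #{start_id}"
      )
    when /mysql/
      # MySQL uses the table's AUTO_INCREMENT counter
      ActiveRecord::Base.connection.execute(
        "ALTER TABLE transactions AUTO_INCREMENT = #{start_id}"
      )
    end
  end
end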