While working on different projects and in different environments, we often need to export a dump from one database and then import it into another. A while ago Slobodan wrote how to export and import a MySQL dump, and here is a guide on how to do it for PostgreSQL.
Export a PostgreSQL database dump
To export a PostgreSQL database we will need to use the pg_dump tool, which will dump all the contents of a selected database into a single file.
We need to run pg_dump in the command line on the computer where the database is stored. So, if the database is stored on a remote server, you will need to SSH to that server in order to run the following command:
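A sketch of that command, assuming the user, database, and file names used throughout this post (db_user, db_name, dump_name.sql), and a plain-text dump:

```shell
pg_dump -U db_user --password -F p db_name > dump_name.sql
```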
Here we used the following options:
-U to specify which user will connect to the PostgreSQL database server.
--password will force pg_dump to prompt for a password before connecting to the server.
-F is used to specify the format of the output file, which can be one of the following:
p – plain-text SQL script
c – custom-format archive
d – directory-format archive
t – tar-format archive
The custom, directory, and tar formats are suitable for input into pg_restore.
To see a list of all the available options, use pg_dump -?.
With the given options, pg_dump will first prompt for a password for the database user db_user and then connect as that user to the database named db_name. After it successfully connects, the output redirect > will write the output produced by pg_dump to a file with the given name, in this case dump_name.sql.
The file created in the described process contains all the SQL queries required to replicate your database.
Import a PostgreSQL database dump
There are two ways to restore a PostgreSQL database:
psql for restoring from a plain SQL script file created with pg_dump,
pg_restore for restoring from a tar file, directory, or custom-format archive created with pg_dump.
1. Restore a database with psql
If your backup is a plain-text file containing an SQL script, then you can restore your database using the PostgreSQL interactive terminal, by running the following command:
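A sketch, assuming the user, database, and file names explained next:

```shell
psql -U db_user db_name < dump_name.sql
```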
Here db_user is the database user, db_name is the database name, and dump_name.sql is the name of your backup file.
2. Restore a database with pg_restore
If you chose the custom, directory, or tar format when creating a backup file, then you will need to use pg_restore in order to restore your database:
pg_restore -d db_name /path/to/your/file/dump_name.tar -c -U db_user
If you use pg_restore, you have various options available, for example:
-c to drop database objects before recreating them,
-C to create a database before restoring into it,
-e to exit if an error is encountered,
-F format to specify the format of the archive.
Run pg_restore -? to get the full list of available options.
You can find more info on the mentioned tools by running man psql, man pg_dump, and man pg_restore.
Starting with v9.2, PostgreSQL added native JSON support, which enabled us to take advantage of some of the benefits that come with a NoSQL database within a traditional relational database such as PostgreSQL.
While working on a Ruby on Rails application that used a PostgreSQL database to store data, we came across an issue where we needed to implement a search by key within a JSON column.
We were already using Ransack for building search forms within the application, so we needed a way of telling Ransack to perform a search by a given key in our JSON column.
This is where Ransackers come in.
The premise behind Ransack is to provide access to Arel predicate methods.
You can find more information on Arel here.
In our case we needed to perform a search within the transactions table and the payload JSON column, looking for records containing a key called invoice_number. To achieve this we added the following ransacker to our Transaction model:
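A sketch of such a ransacker, assuming the model is called Transaction and using PostgreSQL's ->> operator to read the key's value as text:

```ruby
class Transaction < ActiveRecord::Base
  # Expose payload->>'invoice_number' to Ransack as a searchable attribute
  ransacker :invoice_number do |parent|
    Arel::Nodes::InfixOperation.new(
      '->>', parent.table[:payload], Arel::Nodes.build_quoted('invoice_number')
    )
  end
end
```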
Now with our search set on invoice_number_cont (cont being just one of Ransack's available search predicates), if the user entered, for example, 123 in the search field, it would generate a query like this:
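The generated query would look roughly like this (the exact SQL depends on your Ransack and Arel versions):

```sql
SELECT "transactions".* FROM "transactions"
WHERE ("transactions"."payload" ->> 'invoice_number' ILIKE '%123%')
```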
basically performing a search for records in the transactions table that have a key called invoice_number with a value containing the string 123, within the JSON column payload.
I recently worked on a Rails project which had parts of pages in different languages. That may be a problem if you have already translated the entire text into all the required languages. You may even be tempted to hardcode parts of the text in other languages. Fortunately, there is an elegant way to solve that problem: just wrap parts of the template or partials into blocks with the desired locale, like this:
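For example, a partial can be rendered in a fixed locale with I18n.with_locale (the partial name is made up):

```erb
<% I18n.with_locale(:fr) do %>
  <%= render 'shared/legal_notice' %>
<% end %>
```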
Suppose there is a template with only a header and two paragraphs.
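Such a template might look like this (the translation keys are made up):

```erb
<h1><%= t('.header') %></h1>
<p><%= t('.first_paragraph') %></p>
<p><%= t('.second_paragraph') %></p>
```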
And locales in English and French for that template.
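A sketch of the corresponding locale files (the scope and strings are made up):

```yaml
en:
  pages:
    about:
      header: "About us"
      first_paragraph: "We build web applications."
      second_paragraph: "Get in touch."
fr:
  pages:
    about:
      header: "À propos de nous"
      first_paragraph: "Nous créons des applications web."
      second_paragraph: "Contactez-nous."
```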
In the lifetime of every application the time comes for it to be presented to everyone. That’s why we have to put our application on a special server which is designed for this purpose. In short, we need to deploy our application. In this post you will see how to deploy an app with Capistrano 3.
Capistrano is a great developer tool that is used to automatically deploy projects to a remote server.
Add Capistrano to Rails app
I will assume you already have a server set up and an application ready to be deployed remotely.
We will use the gem ‘capistrano-rails’, so we need to add the following gems to the Gemfile:
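A sketch of the Gemfile entry (your exact gems and versions may differ):

```ruby
group :development do
  gem 'capistrano'
  gem 'capistrano-rails'
end
```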
and install the gems with
$ bundle install.
Then run the following command to create the configuration files:
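The standard Capistrano 3 command for this is:

```shell
$ bundle exec cap install
```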
This command creates all the necessary configuration files and directory structure with two stages, staging and production:
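The generated structure looks roughly like this:

```
├── Capfile
├── config
│   ├── deploy
│   │   ├── production.rb
│   │   └── staging.rb
│   └── deploy.rb
└── lib
    └── capistrano
        └── tasks
```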
Sooner or later every new Ruby developer needs to understand the differences between these two common rake tasks. Basically, these simple definitions tell us everything we need to know:
rake db:migrate runs migrations that have not run yet,
rake db:schema:load loads the schema.rb file into the database,
but the real question is when to use one or the other.
Advice: when you are adding a new migration to an existing app you need to run rake db:migrate, but when you join an existing application (especially an old one), or when you drop your application’s database and need to create it again, always run rake db:schema:load to load the schema.
I am working on an application which uses the globalize gem for ActiveRecord model/data translations. Globalize works this way:
- first, specify the attributes which need to be translatable:
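With globalize this is done with translates in the model; a sketch (the model and attribute names are examples):

```ruby
class Post < ActiveRecord::Base
  # These attributes will be stored per-locale in a translations table
  translates :title, :text
end
```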
If you use Vagrant, VirtualBox, and Ubuntu to build your Rails apps and you want to test them with Cucumber scenarios, this is the right post for you. By default Vagrant and VirtualBox use Ubuntu without an X server and GUI.
Everything goes well until you need to run scenarios that require a real browser.
Install Mozilla Firefox
Selenium WebDriver is flexible and lets you run Selenium headless on servers with no display. But in order to run, Selenium needs to launch a browser. If there is no display on the machine, the browsers cannot be launched. So in order to use Selenium, you need to fake a display and let Selenium and the browser think they are running on a machine with a display.
Install the latest version of Mozilla Firefox:
sudo apt-get install firefox
Since Ubuntu is running without an X server, Selenium cannot start Firefox, because Firefox requires an X server.
Setting up a virtual X server
A virtual X server is required to make browsers run normally by making them believe there is a display available, although it doesn’t create any visible windows.
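The usual tool for this is Xvfb (X virtual framebuffer); a sketch, assuming display number :99:

```shell
# install the virtual framebuffer X server
sudo apt-get install xvfb

# start it on display :99 and point applications at it
Xvfb :99 -ac &
export DISPLAY=:99
```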
Another simple task that’s often hard for beginners is importing and exporting MySQL dumps. Here is a quick rundown on how to do it.
To export data you need to use mysqldump.
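The command might look like this, with the options explained next:

```shell
mysqldump -u db_user -p db_name > dump_name.sql
```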
Options given to mysqldump are:
-u db_user – connect as user db_user,
-p – use password; it will ask you to enter your password,
db_name is the name of the MySQL database you want to dump,
> dump_name.sql – by default mysqldump will print the dump to the terminal, but a simple output redirect with > will instead write it to the given filename, in this case dump_name.sql.
Now that you have a dump_name.sql file with all the SQL queries needed to replicate your database, you can import it using the general-purpose mysql client.
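Assuming the same user, database, and file names, the import might look like this:

```shell
mysql -u db_user -p db_name < dump_name.sql
```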
The user, password, and database name options are the same as for mysqldump. Since mysql reads input from the terminal, this time we use an input redirect with < to read input from the given file instead.
As always, for more information you can consult the manuals using man mysqldump and man mysql.
One of the simplest tasks is creating and extracting archives using tar and gzip. Yet for many new developers this is a daunting task. These days tar is mostly used to simply combine a few files into a single file, and gzip is used to compress that file.
Here is a quick overview of how to use tar and gzip to create and compress an archive:
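A sketch of creating a compressed archive (the paths are examples):

```shell
# create some example files to archive
mkdir -p /tmp/tar_demo/docs
echo "hello" > /tmp/tar_demo/docs/note.txt

# c = create, v = verbose, z = gzip-compress, f = write to the given file
tar cvzf /tmp/docs.tar.gz /tmp/tar_demo/docs
```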
Options given to tar are:
c to create a new archive,
v to be verbose,
z to compress the resulting archive with gzip,
f to write the archive to the specified file.
After the options you can list the files and dirs you want to archive.
In all the examples we provide a full path to the file or dir we want to archive. In this case tar will store the files in the archive using the full path. This means that once you extract the files you’ll have the complete directory structure from the root dir onwards.
The way to avoid this is either to manually cd to the dir in which the files are stored, or to tell tar to change dir before archiving the files with the C option.
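For example, with the C option the stored paths are relative to the given directory (the paths are examples):

```shell
# prepare example files
mkdir -p /tmp/tar_demo2/docs
echo "hi" > /tmp/tar_demo2/docs/note.txt

# -C tells tar to change to /tmp/tar_demo2 first,
# so members are stored as "docs/..." instead of the full path
tar cvzf /tmp/docs2.tar.gz -C /tmp/tar_demo2 docs
```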
Finally, to extract an archive:
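A sketch, creating an example archive first so the extraction is self-contained:

```shell
# prepare an example archive to extract
mkdir -p /tmp/tar_demo3/docs
echo "hi" > /tmp/tar_demo3/docs/note.txt
tar czf /tmp/docs3.tar.gz -C /tmp/tar_demo3 docs

# x = extract; the archive is unpacked into the current directory
mkdir -p /tmp/extract_demo
cd /tmp/extract_demo
tar xvzf /tmp/docs3.tar.gz
```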
The x option tells tar to extract the archive into the current directory.
For more information you can consult the manual using man tar.
A gem is a simple way to distribute functionality; it can be a small plugin, a Ruby library, or sometimes a whole program. Thanks to RubyGems, a gem hosting service, developers have a wide range of gems at their disposal, allowing them to easily add functionality to their applications.
But what if there is no gem available that will suit the functionality you need, and you find yourself writing the same code over and over again for different projects? Well, in that case you should consider making your own gem.
It’s considered good practice to extract a gem out of an existing application, since that way you will have a better understanding of all the requirements as well as how the gem will be used. This blog post will illustrate just that on a real-life example, and will take you through the process of creating a slug_converter gem.
For our new project it was necessary to modify the starting id in our database. This could be handled through the migration that creates the table, but we decided to create a rake task that handles it for us.
The rake task that we created detects which database is being used and executes the appropriate changes accordingly.
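A sketch of such a task, assuming a Rails app; the table name, sequence name, and starting id are examples, not the project's actual values:

```ruby
namespace :db do
  desc 'Set the starting id for the users table (example values)'
  task set_starting_id: :environment do
    conn  = ActiveRecord::Base.connection
    start = 10_000

    case conn.adapter_name
    when /postgres/i
      # PostgreSQL uses a sequence for auto-incrementing ids
      conn.execute("ALTER SEQUENCE users_id_seq RESTART WITH #{start}")
    when /mysql/i
      # MySQL keeps the auto-increment counter on the table itself
      conn.execute("ALTER TABLE users AUTO_INCREMENT = #{start}")
    else
      puts "Unsupported adapter: #{conn.adapter_name}"
    end
  end
end
```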