Here at Agaric we work a lot with install profiles and, more often than not, we have to provide default content. This is mostly taxonomy terms, menu links, and sometimes even nodes with fields. Recently, I have started to use Migrate to load that data from JSON files.
Migrate is usually associated with importing content from a legacy website into Drupal, either from a database or files. Loading initial data is just a special case of a migration. Because it handles many kinds of data sources with a minimum of configuration effort, Migrate is well suited for the task.
Here is an example from our project Find It Cambridge. It is a list of terms I would like to add to a vocabulary, stored in a JSON file.
[
  { "term_name": "Braille", "weight": 0 },
  { "term_name": "Sign language", "weight": 1 },
  { "term_name": "Translation services provided", "weight": 2 },
  { "term_name": "Wheelchair accessible", "weight": 3 }
]
If you do not need a particular order for the terms in a vocabulary, you can skip the weight definition. The migration class below specifies a default value of 0 for the weight; with all weights equal, Drupal will list the terms alphabetically.
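For illustration, a weight-less version of the same file would simply omit the weight property (this variant is hypothetical, not part of the Find It Cambridge project):

```json
[
  { "term_name": "Braille" },
  { "term_name": "Sign language" },
  { "term_name": "Translation services provided" },
  { "term_name": "Wheelchair accessible" }
]
```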
Migrate does almost all the work for us; we just need to create a Migration class and configure it in the constructor. For a single JSON file the appropriate choice for the source is MigrateSourceJSON. The destination is an instance of MigrateDestinationTerm. Migrate requires a data source to have a primary key, which is provided via MigrateSQLMap. In this case term_name is defined as the primary key:
class TaxonomyTermJSONMigration extends Migration {
  public function __construct($arguments) {
    parent::__construct($arguments);
    $this->map = new MigrateSQLMap(
      $this->machineName,
      array(
        'term_name' => array(
          'type' => 'varchar',
          'length' => 255,
          'description' => 'The term name.',
          'not null' => TRUE,
        ),
      ),
      MigrateDestinationTerm::getKeySchema()
    );
    $this->source = new MigrateSourceJSON($arguments['path'], 'term_name');
    $this->destination = new MigrateDestinationTerm($arguments['vocabulary']);
    $this->addFieldMapping('name', 'term_name');
    $this->addFieldMapping('weight', 'weight')->defaultValue(0);
  }
}
This migration class expects to find the vocabulary machine name and the location of the JSON file in the $arguments parameter of the constructor. Those parameters are passed to Migration::registerMigration. Registration and processing can be handled during installation of the profile. Because there are several vocabularies to populate, I have defined a function:
function findit_vocabulary_load_terms($machine_name, $path) {
  Migration::registerMigration('TaxonomyTermJSONMigration', $machine_name, array(
    'vocabulary' => $machine_name,
    'path' => $path,
  ));
  $migration = Migration::getInstance($machine_name);
  $migration->processImport();
}
This function is called in our profile's implementation of hook_install with the path and vocabulary machine name for each vocabulary. The file is stored at profiles/findit/data/accessibility_options.json relative to the Drupal installation directory. The following snippet is an extract from our install profile that demonstrates creating the vocabulary and using the above function to add the terms.
function findit_install() {
  …
  $vocabularies = array(
    …
    'accessibility_options' => st('Accessibility'),
    …
  );
  …
  foreach ($vocabularies as $machine_name => $name) {
    findit_create_vocabulary($name, $machine_name);
    findit_vocabulary_load_terms($machine_name, dirname(__FILE__) . '/data/' . $machine_name . '.json');
  }
  …
}
Executing drush site-install findit will set up content types and vocabularies and create the taxonomy terms.
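During development it is handy to re-run an import after editing the JSON file. A minimal sketch, assuming the migration was registered under the vocabulary machine name as in the function above (processRollback() is Migrate's counterpart to processImport()):

```php
// Remove the previously imported terms, then import the current
// contents of the JSON file again. The machine name matches the one
// used at registration time.
$migration = Migration::getInstance('accessibility_options');
$migration->processRollback();
$migration->processImport();
```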
In the past I have used Drupal's API to create taxonomy terms, menu links, and other content, which also works well and does not add the mental overhead of another tool. But the Migrate approach has one key benefit in my opinion: it provides a well-defined way of separating data from the means to import it and enables the developer to easily handle more complex tasks like nodes with fields. Compare the above approach of importing taxonomy terms to the following equivalent code:
$terms = array(
  array('vocabulary_machine_name' => 'accessibility_options', 'name' => 'Braille', 'weight' => 0),
  array('vocabulary_machine_name' => 'accessibility_options', 'name' => 'Sign language', 'weight' => 1),
  array('vocabulary_machine_name' => 'accessibility_options', 'name' => 'Translation services provided', 'weight' => 2),
  array('vocabulary_machine_name' => 'accessibility_options', 'name' => 'Wheelchair accessible', 'weight' => 3),
);
foreach ($terms as $term_data) {
  $term = (object) $term_data;
  taxonomy_term_save($term);
}
Even though one can write the code in a style that separates code and data, it gets more complicated with less intuitive APIs. My preference is to have content in a separate file and rely on a well-tested tool for importing it. Using Migrate and JSON files is a convenient and powerful solution to this end. What is your approach to providing default content?
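To illustrate how this approach extends to nodes with fields, here is a sketch of a node migration. The class name, bundle, JSON properties, and the field name field_description are hypothetical and would need to match your actual content type; the structure mirrors the taxonomy term migration above, swapping in MigrateDestinationNode:

```php
class ExampleNodeJSONMigration extends Migration {
  public function __construct($arguments) {
    parent::__construct($arguments);
    // Use the node title as the primary key of the source data.
    $this->map = new MigrateSQLMap(
      $this->machineName,
      array(
        'title' => array(
          'type' => 'varchar',
          'length' => 255,
          'not null' => TRUE,
        ),
      ),
      MigrateDestinationNode::getKeySchema()
    );
    $this->source = new MigrateSourceJSON($arguments['path'], 'title');
    // The bundle (content type machine name) is passed in via $arguments.
    $this->destination = new MigrateDestinationNode($arguments['bundle']);
    $this->addFieldMapping('title', 'title');
    // Map a JSON property onto a field; field_description is a
    // hypothetical field on the hypothetical content type.
    $this->addFieldMapping('field_description', 'description');
  }
}
```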
Comments
2015 October 28
Emmanuel
Thanks
Interesting, thanks for the write-up.
2015 October 30
Rodrigo Aguilera
I've been using a similar
I've been using a similar approach with yaml to define the structure and CSV to store the data.
https://github.com/Ymbra/migrate_default_content
2015 November 01
Capi Etheriel
Hooks
Are the node_save or term_save hooks called when importing using Migrate?
2015 November 02
Stefan Freudenberg
Re: Hooks
The destination classes for nodes and terms provided by Migrate create (and update) the entities by calling node_save, taxonomy_term_save, etc., so all hooks invoked by those functions are called. Migrate also gives you the option to disable certain hooks. In the above example the array of hooks to disable would be passed to registerMigration as part of the arguments array with the key disable_hooks.
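As a sketch of what that might look like, assuming the registration call from the article (the hook and module names here are examples only, not taken from the project):

```php
Migration::registerMigration('TaxonomyTermJSONMigration', $machine_name, array(
  'vocabulary' => $machine_name,
  'path' => $path,
  // Disable specific modules' implementations of a hook during import.
  // Format: hook name => array of module names (examples only).
  'disable_hooks' => array(
    'taxonomy_term_insert' => array('pathauto'),
  ),
));
```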