You have used the Rabbit Hole module to prevent people from directly visiting select content items, but when you include those items in a reference list, Drupal still links to them.
Here is one way to fix that.
This presumes you are listing nodes, taxonomy terms, or other content entities with some that are set to not allow direct visiting, but that the others should be linked. Essentially, if you have configured a content type, vocabulary, or other bundle in Rabbit Hole to Allow these settings to be overridden for individual entities, and are using this capability, you may be in a situation where item A links to its page, item B should not because that page will give access denied or page not found, and item C perhaps should link to its page, etc.
First, if the list is shown as labels, the Rabbit Hole Links module is all you need. Install it, enable it, and you are done: no configuration needed, and no need for the rest of this blog post.
However, if you are listing content rendered as view modes, that module will not handle the links for you. In our case, we have a "Person" content type and are using Entity Reference Override module to show a varied project role alongside the person's name. As some of the people should be linked, and others should not have standalone pages, we needed a way to deactivate links based on the per-node Rabbit Hole settings.
This approach follows the path laid out by Daniel Flanagan of Horizontal using bundle classes to provide control over node URLs. Laying out that path for us all to follow also involved Daniel building several bridges, in the form of four patches (which all apply to the current stable versions of these modules, but work is needed to keep up with the latest development branches and get these patches in):
As Daniel Flanagan's blog post also helpfully provides, here are the composer patches that need to go in the patches section (under extra) of your project's composer.json file:
"patches": {
    "drupal/core": {
        "Path module calls getInternalPath without checking if the url is routed": "https://www.drupal.org/files/issues/2023-05-03/3342398-18-D9.patch"
    },
    "drupal/pathauto": {
        "deleteEntityPathAll calls getInternalPath on potentially unrouted url": "https://www.drupal.org/files/issues/2023-05-03/pathauto-unrouted-3357928-2.patch",
        "Alias should not get created if system path is <nolink>": "https://www.drupal.org/files/issues/2023-06-15/pathauto-nolink-3367043-2.patch"
    },
    "drupal/redirect": {
        "redirect_form_node_form_alter calls getInternalPath on potentially unrouted url": "https://www.drupal.org/files/issues/2023-02-16/redirect-unrouted-3342409-2.patch"
    }
}
And with that, you are ready for the code!
First, you may need to make the link conditional on a URL existing in your template. Here is our node--person--project-details-team.html.twig for showing the person's name (node title) linked to their page (if applicable) followed by their project role (if any; we made this a field not shown on the Person edit form but only in the Entity Reference Override widget form used when the person is referenced from a project).
{% if url %}<a href="{{ url }}">{% endif %}{{ label }}{% if url %}</a>{% endif %}{% if node.field_project_title.value %}{{- content.field_project_title -}}{% endif %}
In a custom module, called example here, first have your bundle class take over for the specified content type in your example.module file:
<?php

/**
 * Implements hook_entity_bundle_info_alter().
 */
function example_entity_bundle_info_alter(array &$bundles): void {
  if (isset($bundles['node']['person'])) {
    $bundles['node']['person']['class'] = \Drupal\example\Entity\Bundle\PersonNode::class;
  }
}
In the same custom module, make a file at src/Entity/Bundle/PersonNode.php:
<?php

namespace Drupal\example\Entity\Bundle;

use Drupal\Core\Url;
use Drupal\node\Entity\Node;

/**
 * A bundle class for node entities.
 */
class PersonNode extends Node {

  /**
   * Use the per-node Rabbit Hole settings to determine the canonical URL.
   *
   * {@inheritdoc}
   */
  public function toUrl($rel = 'canonical', array $options = []) {
    if ($rel === 'canonical' && $this->hasField('rabbit_hole__settings')) {
      if ($this->get('rabbit_hole__settings')->action === 'access_denied') {
        return new Url('<nolink>');
      }
    }
    return parent::toUrl($rel, $options);
  }

  /**
   * {@inheritdoc}
   *
   * This is important to avoid accidentally having pathauto delete all url aliases.
   *
   * @see https://www.drupal.org/project/pathauto/issues/3367067
   */
  public function hasLinkTemplate($rel) {
    if ($rel === 'canonical' && $this->hasField('rabbit_hole__settings')) {
      if ($this->get('rabbit_hole__settings')->action === 'access_denied') {
        return FALSE;
      }
    }
    return parent::hasLinkTemplate($rel);
  }

}
The above works for our setup (where we only set individual items to access denied, and we know the default will always be to show the page), but the way we get the Rabbit Hole settings should be updated to follow the more robust approach in Rabbit Hole Links.
Overriding this in a way a contributed module could (so not bundle classes) should be possible, but the need to unset canonical links to use uri_callback and then reproduce the canonical link logic for all bundles that you do not want to affect—all while Drupal wants to deprecate uri_callback in routes for entities—makes this a not-fun contrib experience. But possibly the Rabbit Hole Links module will be up for the task!
I'd also love to hear from Typed Entity aficionados if this is possible with that alternative to bundle classes, and if so how!
In a previous article we explained the syntax used to write Drupal migrations. We also provided references for subfields and content entities' properties, including those provided by the Commerce module. This time we are going to list the configuration options of many migrate source plugins. For example, when importing from a JSON file you need to specify which data fetcher and parser to use. In the case of CSV migrations, the source plugin configuration changes depending on the presence of a headers row. Finding out which options are available might require some Drupal development knowledge. To make the process easier, in today's article we are presenting a reference of available configuration options for migrate source plugins provided by Drupal core and some contributed modules.
For each migrate source plugin we will present: the module that provides it, the class that defines it, the class that the plugin extends, and any inherited options from the class hierarchy. For each plugin configuration option we will list its name, type, a description, and a note if it is optional.
Module: Migrate (Drupal Core)
Class: Drupal\migrate\Plugin\migrate\source\SourcePluginBase
Extends: Drupal\Core\Plugin\PluginBase
This abstract class is extended by most migrate source plugins. This means that the provided configuration keys apply to any source plugin extending it.
List of configuration keys:
The high_water_property and track_changes are mutually exclusive. They are both designed to conditionally import new or updated records from the source. Hence, only one can be configured per migration definition file.
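To make this concrete, here is a minimal sketch of a source definition using high_water_property; the d7_node plugin and the changed column are illustrative, and any column whose value only ever increases (a timestamp or serial ID) can serve as the high water mark:

```yaml
source:
  plugin: d7_node
  # Only import records whose 'changed' value is greater than the
  # highest value seen in the previous migration run.
  high_water_property:
    name: changed
```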
Module: Migrate (Drupal Core)
Class: Drupal\migrate\Plugin\migrate\source\SqlBase
Extends: Drupal\migrate\Plugin\migrate\source\SourcePluginBase
Inherited configuration options: skip_count, cache_counts, cache_key, track_changes, high_water_property, and source_module.
This abstract class is extended by migrate source plugins whose data may be fetched via a database connection. This means that the provided configuration keys apply to any source plugin extending it.
In addition to the keys provided in the parent class chain, this abstract class provides the following configuration keys:
To explain how these configuration keys are used, consider the following database connections:
<?php

$databases['default']['default'] = [
  'database' => 'drupal-8-or-9-database-name',
  'username' => 'drupal-8-or-9-database-username',
  'password' => 'drupal-8-or-9-database-password',
  'host' => 'drupal-8-or-9-database-server',
  'port' => '3306',
  'namespace' => 'Drupal\\Core\\Database\\Driver\\mysql',
  'driver' => 'mysql',
];

$databases['migrate']['default'] = [
  'database' => 'drupal-6-or-7-database-name',
  'username' => 'drupal-6-or-7-database-username',
  'password' => 'drupal-6-or-7-database-password',
  'host' => 'drupal-6-or-7-database-server',
  'port' => '3306',
  'namespace' => 'Drupal\\Core\\Database\\Driver\\mysql',
  'driver' => 'mysql',
];
This snippet can be added to settings.php or settings.local.php. The $databases array is a nested array of at least three levels. The first level defines the database keys: default and migrate in our example. The second level defines the database targets: default in both cases. The third level is an array with connection details for each key/target combination. This documentation page contains more information about database configuration.
Based on the specified configuration values, this is how the Migrate API determines which database connection to use:
Note that all these configuration keys are optional. If none is set, the plugin defaults to the connection specified under $databases['migrate']['default']. At a minimum, set the key configuration even if its value is migrate. This makes it explicit which connection is being used.
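Following that advice, a sketch of a source definition that states the connection explicitly might look like this (the d7_node plugin is just an example; key and target mirror the first two levels of the $databases array):

```yaml
source:
  plugin: d7_node
  # Use the connection defined at $databases['migrate']['default'].
  key: migrate
  target: default
```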
Module: Migrate Drupal (Drupal Core)
Class: Drupal\migrate_drupal\Plugin\migrate\source\DrupalSqlBase
Extends: Drupal\migrate\Plugin\migrate\source\SqlBase
Inherited configuration options: skip_count, cache_counts, cache_key, track_changes, high_water_property, source_module, key, target, database_state_key, batch_size, and ignore_map.
This abstract class provides general purpose helper methods that are commonly needed when writing source plugins that use a Drupal database as a source: for example, checking if a given module exists and reading Drupal configuration variables. Check the linked class documentation for more available methods.
In addition to the keys provided in the parent class chain, this abstract class provides the following configuration key:
Warning: A plugin extending this abstract class might want to use this configuration key in the source definition to set module dependencies. If so, the expected keys might clash with other source constants used in the process pipeline. Array keys in PHP are case sensitive. Using uppercase in custom source constants might avoid this clash, but it is preferable to use a different name to avoid confusion.
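As an illustration, source constants are defined under the source section and consumed in the process pipeline with a constants/ prefix; the constant names below are made up for the example:

```yaml
source:
  plugin: d7_node
  constants:
    bool_0: 0
    uid_admin: 1
process:
  # Hard-code values for every imported row via the constants.
  sticky: constants/bool_0
  uid: constants/uid_admin
```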
This abstract class is extended by dozens of core classes that provide an upgrade path from Drupal 6 and 7. It is also used by the Commerce Migrate module to read product types, product display types, and shipping flat rates from a Commerce 1 database. The same module follows a similar approach to read data from an Ubercart database. The Paragraphs module also extends it, implementing the configurable plugin interface so it can import field collection types and paragraph types from Drupal 7.
Module: Migrate Drupal (Drupal Core). Plugin ID: d8_config
Class: Drupal\migrate_drupal\Plugin\migrate\source\d8\Config
Extends: Drupal\migrate_drupal\Plugin\migrate\source\DrupalSqlBase
Inherited configuration options: skip_count, cache_counts, cache_key, track_changes, high_water_property, source_module, key, target, database_state_key, batch_size, ignore_map, and constants.
This plugin allows reading configuration values from a Drupal 8 site by reading its config table.
This plugin does not define extra configuration keys beyond those provided in the parent class chain. An example configuration for this plugin would be:
source:
  plugin: d8_config
  key: migrate
  skip_count: true
In this case we are setting the key property from SqlBase to use the migrate default database connection. The skip_count from SourcePluginBase indicates that there is no need to count how many records exist in the source database before executing migration operations like importing them.
This plugin is presented to show that Drupal core already offers a way to migrate data from Drupal 8. Remember that there are dozens of other plugins extending DrupalSqlBase. It would be impractical to list them all here. See this API page for a list of all of them.
Module: Migrate Source CSV. Plugin ID: csv
Class: Drupal\migrate_source_csv\Plugin\migrate\source\CSV
Extends: Drupal\migrate\Plugin\migrate\source\SourcePluginBase
Inherited configuration options: skip_count, cache_counts, cache_key, track_changes, high_water_property, and source_module.
This plugin allows reading data from a CSV file. We used this plugin in the CSV migration example of the 31 days of migration series.
In addition to the keys provided in the parent class chain, this plugin provides the following configuration keys:
Important: The configuration options changed significantly between the 8.x-3.x and 8.x-2.x branches. Refer to this change record for a reference on how to configure the plugin for the 8.x-2.x branch.
For reference, below is the source plugin configuration used in the CSV migration example:
source:
  plugin: csv
  path: modules/custom/ud_migrations/ud_migrations_csv_source/sources/udm_photos.csv
  ids: [photo_id]
  header_offset: null
  fields:
    - name: photo_id
      label: 'Photo ID'
    - name: photo_url
      label: 'Photo URL'
Module: Migrate Spreadsheet. Plugin ID: spreadsheet
Class: Drupal\migrate_spreadsheet\Plugin\migrate\source\Spreadsheet
Extends: Drupal\migrate\Plugin\migrate\source\SourcePluginBase
Inherited configuration options: skip_count, cache_counts, cache_key, track_changes, high_water_property, and source_module.
This plugin allows reading data from Microsoft Excel and LibreOffice Calc files. It requires the PhpOffice/PhpSpreadsheet library and many PHP extensions including ext-zip. Check this page for a full list of dependencies. We used this plugin in the spreadsheet migration examples of the 31 days of migration series.
In addition to the keys provided in the parent class chain, this plugin provides the following configuration keys:
Note that nowhere in the plugin configuration do you specify the file type. The same setup applies to both Microsoft Excel and LibreOffice Calc files. The library takes care of detecting and validating the proper type.
For reference, below is the source plugin configuration used in the LibreOffice Calc migration example:
source:
  plugin: spreadsheet
  file: modules/custom/ud_migrations/ud_migrations_sheets_sources/sources/udm_book_paragraph.ods
  worksheet: 'UD Example Sheet'
  header_row: 1
  origin: A2
  columns:
    - book_id
    - book_title
    - 'Book author'
  row_index_column: 'Document Row Index'
  keys:
    book_id:
      type: string
Module: Migrate Plus
Class: Drupal\migrate_plus\Plugin\migrate\source\SourcePluginExtension
Extends: Drupal\migrate\Plugin\migrate\source\SourcePluginBase
Inherited configuration options: skip_count, cache_counts, cache_key, track_changes, high_water_property, and source_module.
This abstract class provides extra configuration keys. It is extended by the URL plugin (explained later) and by source plugins provided by other modules like Feeds Migrate.
In addition to the keys provided in the parent class chain, this abstract class provides the following configuration keys:
See the code snippet for the Url plugin in the next section for an example of how these configuration options are used.
Module: Migrate Plus. Plugin ID: url
Class: Drupal\migrate_plus\Plugin\migrate\source\Url
Extends: Drupal\migrate_plus\Plugin\migrate\source\SourcePluginExtension
Inherited configuration options: skip_count, cache_counts, cache_key, track_changes, high_water_property, source_module, fields, and ids.
This plugin allows reading data from URLs. Using data parser plugins it is possible to fetch data from JSON, XML, SOAP, and Google Sheets. Note that this source plugin uses other plugins provided by Migrate Plus that might require extra configuration keys in addition to the ones explicitly defined in the plugin class. Those will also be listed.
In addition to the keys provided in the parent class chain, this plugin provides the following configuration keys:
The data parser plugins provide the following configuration keys:
The HTTP data fetcher plugins provide the following configuration keys:
The basic and digest authentication plugins provide the following configuration keys:
The OAuth2 authentication plugin requires the sainsburys/guzzle-oauth2-plugin composer package to work. It provides the following configuration keys:
The client credentials grant type requires the following configuration keys:
For configuration keys required by other grant types, refer to the classes that implement them. Read this article on adding HTTP request headers and authentication parameters for example configurations.
There are many combinations possible to configure this plugin. In the 31 days of migration series there are many example configurations. For reference, below is the source plugin configuration used in the local JSON node migration example:
source:
  plugin: url
  data_fetcher_plugin: file
  data_parser_plugin: json
  urls:
    - modules/custom/ud_migrations/ud_migrations_json_source/sources/udm_data.json
  item_selector: /data/udm_people
  fields:
    - name: src_unique_id
      label: 'Unique ID'
      selector: unique_id
    - name: src_name
      label: 'Name'
      selector: name
    - name: src_photo_file
      label: 'Photo ID'
      selector: photo_file
    - name: src_book_ref
      label: 'Book paragraph ID'
      selector: book_ref
  ids:
    src_unique_id:
      type: integer
Module: Migrate (Drupal Core). Plugin ID: embedded_data
Class: Drupal\migrate\Plugin\migrate\source\EmbeddedDataSource
Extends: Drupal\migrate\Plugin\migrate\source\SourcePluginBase
Inherited configuration options: skip_count, cache_counts, cache_key, track_changes, high_water_property, and source_module.
This plugin allows the definition of data to be imported right inside the migration definition file. We used this plugin in many of the examples of the 31 days of migration series. It is also used in many core tests for the Migrate API itself.
In addition to the keys provided in the parent class chain, this plugin provides the following configuration keys:
Many examples of 31 days of migration series use this plugin. You can get the example modules from this repository. For reference, below is the source plugin configuration used in the first migration example:
source:
  plugin: embedded_data
  data_rows:
    - unique_id: 1
      creative_title: 'The versatility of Drupal fields'
      engaging_content: 'Fields are Drupal''s atomic data storage mechanism...'
    - unique_id: 2
      creative_title: 'What is a view in Drupal? How do they work?'
      engaging_content: 'In Drupal, a view is a listing of information. It can be a list of nodes, users, comments, taxonomy terms, files, etc...'
  ids:
    unique_id:
      type: integer
This plugin can also be used to create default content when the data is known in advance. We often present Drupal site building workshops. To save time, we use this plugin to create nodes which are later used when explaining how to create Views. Check this repository for an example of this. Note that it uses a different directory structure to store the migrations as explained in this blog post.
Module: Migrate Drupal (Drupal Core). Plugin ID: content_entity
Class: Drupal\migrate_drupal\Plugin\migrate\source\ContentEntity
Extends: Drupal\migrate\Plugin\migrate\source\SourcePluginBase
Inherited configuration options: skip_count, cache_counts, cache_key, track_changes, high_water_property, and source_module.
This plugin returns content entities from a Drupal 8 or 9 installation. It uses the Entity API to get the data to migrate. If the source entity type has custom field storage fields or computed fields, this class will need to be extended and the new class will need to load/calculate the values for those fields.
In addition to the keys provided in the parent class chain, this plugin provides the following configuration key:
For reference, this is how this plugin is configured to get all nodes of type article in their default language only:
source:
  plugin: content_entity:node
  bundle: article
  include_translations: false
Note: this plugin was brought into core in this issue, copied from the Drupal 8 migration (source) module. The latter can be used if the source database does not use the default connection.
Module: Migrate Plus. Plugin ID: table
Class: Drupal\migrate_plus\Plugin\migrate\source\Table
Extends: Drupal\migrate\Plugin\migrate\source\SqlBase
Inherited configuration options: skip_count, cache_counts, cache_key, track_changes, high_water_property, source_module, key, target, database_state_key, batch_size, and ignore_map.
This plugin allows reading data from a single database table. It uses one of the database connections for the site as defined by the options. See this test for an example on how to use this plugin.
In addition to the keys provided in the parent class chain, this plugin provides the following configuration key:
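A minimal sketch of using this plugin might look like the following. The table_name and id_fields key names are taken from the Migrate Plus Table plugin as of this writing; verify them against your installed version:

```yaml
source:
  plugin: table
  # Read directly from the 'users' table of the configured connection.
  table_name: users
  # Columns that uniquely identify each row.
  id_fields:
    uid:
      type: integer
```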
Module: Migrate (Drupal Core). Plugin ID: empty
Class: Drupal\migrate\Plugin\migrate\source\EmptySource
Extends: Drupal\migrate\Plugin\migrate\source\SourcePluginBase
Inherited configuration options: skip_count, cache_counts, cache_key, track_changes, high_water_property, and source_module.
This plugin returns an empty row by default. It can be used as a placeholder to defer setting the source plugin to a deriver. An example of this can be seen in the migrations for Drupal 6 and Drupal 7 entity reference translations. In both cases, the source plugin will be determined by the EntityReferenceTranslationDeriver.
In addition to the keys provided in the parent class chain, this plugin does not define extra configuration keys. If the plugin is used with source constants, a single row containing the constant values will be returned. For example:
source:
  plugin: empty
  constants:
    entity_type: node
    field_name: body
The plugin will return a single row containing 'entity_type' and 'field_name' elements, with values of 'node' and 'body', respectively. This is not very useful. For the most part, the plugin is used to defer the definition to a deriver as mentioned before.
Module: Migrate Drupal (Drupal Core). Plugin ID: md_empty
Class: Drupal\migrate_drupal\Plugin\migrate\source\EmptySource
Extends: Drupal\migrate\Plugin\migrate\source\EmptySource
Inherited configuration options: skip_count, cache_counts, cache_key, track_changes, high_water_property, and source_module.
By default, this plugin returns an empty row with Drupal specific config dependencies. If the plugin is used with source constants, a single row containing the constant values will be returned. These can be seen in the user_picture_field.yml and d6_upload_field.yml migrations.
In addition to the keys provided in the parent class chain, this plugin provides the following configuration keys:
In Drupal core itself there are more than 100 migrate source plugins, most of which come from the Migrate Drupal module. And many more are made available by contributed modules. It would be impractical to document them all here. To get a list yourself, load the plugin.manager.migrate.source service and call its getDefinitions() method. This will return all migrate source plugins provided by the modules that are currently enabled on the site. These Drush commands would get the list:
# List of migrate source plugin definitions.
$ drush php:eval "print_r(\Drupal::service('plugin.manager.migrate.source')->getDefinitions());"

# List of migrate source plugin ids.
$ drush php:eval "print_r(array_keys(\Drupal::service('plugin.manager.migrate.source')->getDefinitions()));"
To find out which configuration options are available for any source plugin consider the following:
What did you learn in today’s article? Did you know that migrate source plugins can inherit configuration keys from their class hierarchy? Were you aware that there are so many source plugins? Other than the ones listed here, which source plugins have you used? Please share your answers in the comments. Also, we would be grateful if you shared this article with your friends and colleagues.
At Agaric, we perform a lot of Drupal upgrades. These very often involve transitioning away from older versions of PHP. Even when your hosting service provides multiple versions of PHP, you can still run into issues activating the appropriate one for each site: whether that's within the web server, at the command line, or via Drush (or other tools). In this blog post, we'll be providing remedies for all of these cases. Most content applies to any PHP application.
Drupal 8 migrations quickstart guide (half day training)
DrupalCon Nashville
Learning objectives:
This is an advanced course. You should be familiar with source, process, and destination plugins; how the process pipeline operates; and how to execute migrations from the command line via Drush. Understanding the migrations in this demo repo suffices to take this training. Note that the repo mentioned before is not the one we will be covering with the training. You can also have a look at this video for an overview of the Migrate API.
Cross-posted from opensource.com.
Since it is good practice to use Composer to manage a Drupal site's dependencies, use it to install the tools for BDD tests: Behat, Mink, and the Behat Drupal Extension. The Behat Drupal Extension lists Behat and Mink among its dependencies, so you can get all of the tools by installing the Behat Drupal Extension package:
composer require drupal/drupal-extension --dev
Mink allows you to write tests in a human-readable format. For example:
Given I am registered user,
When I visit the homepage,
Then I should see a personalized news feed
Because these tests are supposed to emulate user interaction, you can assume they will be executed within a web browser. That is where Mink comes into play. There are various browser emulators, such as Goutte and Selenium, and they all behave differently and have very different APIs. Mink allows you to write a test once and execute it in different browser emulators. In layman's terms, Mink allows you to control a browser programmatically to emulate a user's action.
Now that you have the tools installed, you should have a behat command available. If you run it:
./vendor/bin/behat
you should get an error, like:
FeatureContext context class not found and can not be used
Start by initializing Behat:
./vendor/bin/behat --init
This will create two folders and one file, which we will revisit later; for now, running behat without the extra parameters should not yield an error. Instead, you should see an output similar to this:
No scenarios
No steps
0m0.00s (7.70Mb)
Now you are ready to write your first test, for example, to verify that website visitors can leave a message using the site-wide contact form.
By default, Behat will look for files in the features folder that's created when the project is initialized. The file inside that folder should have the .feature extension. Let's test the site-wide contact form. Create a file named contact-form.feature in the features folder with the following content:
Feature: Contact form
In order to send a message to the site administrators
As a visitor
I should be able to use the site-wide contact form
Scenario: A visitor can use the site-wide contact form
Given I am at "contact/feedback"
When I fill in "name" with "John Doe"
And I fill in "mail" with "john@doe.com"
And I fill in "subject" with "Hello world"
And I fill in "message" with "Lorem Ipsum"
And I press "Send message"
Then I should see the text "Your message has been sent."
Behat tests are written in Gherkin, a human-readable format that follows the Context–Action–Outcome pattern. It consists of several special keywords that, when parsed, will execute commands to emulate a user's interaction with the website.
The sentences that start with the keywords Given, When, and Then indicate the Context, Action, and Outcome, respectively. They are called Steps and they should be written from the perspective of the user performing the action. Behat will read them and execute the corresponding Step Definitions. (More on this later.)
This example instructs the browser to visit a page under the "contact/feedback" link, fill in some field values, press a button, and check whether a message is present on the page to verify that the action worked. Run the test; your output should look similar to this:
1 scenario (1 undefined)
7 steps (7 undefined)
0m0.01s (8.01Mb)
>> default suite has undefined steps. Please choose the context to generate snippets:
[0] None
[1] FeatureContext
>
Type 0 at the prompt to select the None option. This verifies that Behat found the test and tried to execute it, but it is complaining about undefined steps. These are the Step Definitions: PHP code that will execute the tasks required to fulfill the step. You can check which step definitions are available by running:
./vendor/bin/behat -dl
Currently there are no step definitions, so you shouldn't see any output. You could write your own, but for now, you can use some provided by the Mink extension and the Behat Drupal Extension. Create a behat.yml file at the same level as the features folder—not inside it—with the following contents:
default:
suites:
default:
contexts:
- FeatureContext
- Drupal\DrupalExtension\Context\DrupalContext
- Drupal\DrupalExtension\Context\MinkContext
- Drupal\DrupalExtension\Context\MessageContext
- Drupal\DrupalExtension\Context\DrushContext
extensions:
Behat\MinkExtension:
goutte: ~
Step definitions are provided through Contexts. When you initialized Behat, it created a FeatureContext without any step definitions. In the example above, we are updating the configuration file to include this empty context along with others provided by the Behat Drupal Extension. Running ./vendor/bin/behat -dl again produces a list of 120+ steps you can use; here is a trimmed version of the output:
default | Given I am an anonymous user
default | When I visit :path
default | When I click :link
default | Then I (should )see the text :text
Now you can perform lots of actions. Run the tests again with ./vendor/bin/behat. The test should fail with an error similar to:
Scenario: A visitor can use the site-wide contact form # features/contact-form.feature:8
And I am at "contact/feedback" # Drupal\DrupalExtension\Context\MinkContext::assertAtPath()
When I fill in "name" with "John Doe" # Drupal\DrupalExtension\Context\MinkContext::fillField()
And I fill in "mail" with "john@doe.com" # Drupal\DrupalExtension\Context\MinkContext::fillField()
And I fill in "subject" with "Hello world" # Drupal\DrupalExtension\Context\MinkContext::fillField()
Form field with id|name|label|value|placeholder "subject" not found. (Behat\Mink\Exception\ElementNotFoundException)
And I fill in "message" with "Lorem Ipsum" # Drupal\DrupalExtension\Context\MinkContext::fillField()
And I press "Send message" # Drupal\DrupalExtension\Context\MinkContext::pressButton()
Then I should see the text "Your message has been sent." # Drupal\DrupalExtension\Context\MinkContext::assertTextVisible()
--- Failed scenarios:
features/contact-form.feature:8

1 scenario (1 failed)
7 steps (3 passed, 1 failed, 3 skipped)
0m0.10s (12.84Mb)
The output shows that the first three steps—visiting the contact page and filling in the name and mail fields—worked. But the test fails when it tries to fill in the subject, and it skips the rest of the steps. These steps require you to use the name attribute of the HTML tag that renders the form field.
When I created the test, I purposely used the proper values for the name and address fields so they would pass. When in doubt, use your browser's developer tools to inspect the source code and find the proper values you should use. By doing this, I found I should use subject[0][value] for the subject and message[0][value] for the message. When I update my test to use those values and run it again, it should pass with flying colors and produce an output similar to:
1 scenario (1 passed)
7 steps (7 passed)
0m0.29s (12.88Mb)
Success! The test passes! In case you are wondering, I'm using the Goutte driver. Goutte is a headless, command-line browser written in PHP, and the driver to use it with Behat is installed as a dependency of the Behat Drupal Extension package.
As mentioned above, BDD tests should be written from the perspective of the user performing the action. Users don't think in terms of HTML name attributes. That is why writing tests using subject[0][value] and message[0][value] is both cryptic and not very user friendly. You can improve this by creating custom steps at features/bootstrap/FeatureContext.php, which was generated when Behat initialized.
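For example, a custom step could translate user-friendly field names into the HTML name attributes Drupal renders. The following is only a sketch: it assumes your generated FeatureContext extends the Drupal Extension's RawDrupalContext, and the step wording and field map are hypothetical, to be adapted to your form's markup.

```php
<?php

use Drupal\DrupalExtension\Context\RawDrupalContext;

/**
 * Defines application features from the specific context.
 */
class FeatureContext extends RawDrupalContext {

  /**
   * Fills in a contact form field using its user-friendly name.
   *
   * @When I fill in the contact form :field with :value
   */
  public function fillContactFormField($field, $value) {
    // Map human-friendly names to the name attributes Drupal renders.
    // These keys and values are examples; inspect your form to confirm them.
    $map = [
      'subject' => 'subject[0][value]',
      'message' => 'message[0][value]',
    ];
    $name = isset($map[$field]) ? $map[$field] : $field;
    $this->getSession()->getPage()->fillField($name, $value);
  }

}
```

With this in place, the scenario can read When I fill in the contact form "subject" with "Hello world" instead of exposing subject[0][value] to whoever reads the feature file.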
Also, if you run the test several times, you will find that it starts failing. This is because Drupal, by default, imposes a limit of five submissions per hour. Each time you run the test, it's like a real user is performing the action. Once the limit is reached, you'll get an error on the Drupal interface. The test fails because the expected success message is missing.
This illustrates the importance of debugging your tests. There are some steps that can help with this, like Then print last drush output and Then I break. Better yet is using a real debugger, like Xdebug. You can also install other packages that provide more step definitions specifically for debugging purposes, like Behatch and Nuvole's extensions. For example, you can configure Behat to take a screenshot of the state of the browser when a test fails (if this capability is provided by the driver you're using).
Regarding drivers and browser emulators, Goutte doesn't support JavaScript. If a feature depends on JavaScript, you can test it by using the Selenium2Driver in combination with Geckodriver and Firefox. Every driver and browser has different features and capabilities. For example, the Goutte driver provides access to the response's HTTP status code, but the Selenium2Driver doesn't. (You can read more about drivers in Mink and Behat.) For Behat to pick up a JavaScript-enabled driver/browser, you need to annotate the scenario with the @javascript tag. For example:
Feature:
(feature description)
@javascript
Scenario: An editor can select the author of a node from an autocomplete field
(list of steps)
Another tag that is useful for Drupal sites is @api. This instructs the Behat Drupal Extension to use a driver that can perform operations specific to Drupal; for example, creating users and nodes for your tests. Although you could follow the registration process to create a user and assign roles, it is easier to simply use a step like Given I am logged in as a user with the "Authenticated user" role. For this to work, you need to specify whether you want to use the Drupal or Drush driver. Make sure to update your behat.yml file accordingly. For example, to use the Drupal driver:
default:
  extensions:
    Drupal\DrupalExtension:
      blackbox: ~
      api_driver: drupal
      drupal:
        drupal_root: ./relative/path/to/drupal
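Putting the pieces together, a scenario using the @api tag might look like the following sketch. The step wording follows the Drupal Extension's built-in steps; the path and texts are placeholders to adapt to your site:

```gherkin
@api
Feature: Authenticated browsing

  Scenario: An authenticated user can reach their account page
    Given I am logged in as a user with the "authenticated user" role
    When I visit "/user"
    Then I should see the text "Log out"
```

Because of the @api tag, the Drupal Extension creates the user through the configured driver and cleans it up when the scenario ends.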
I hope this introduction to BDD testing in Drupal serves you well. If you have questions, feel free to add a comment below, send me an email at mauricio@agaric.com (or through the Agaric contact form) or a tweet at @dinarcon.
Congratulations to the new leadership committee for May First People Link!
Agaric is proud and excited to be a member of this organization, building fantastic shared internet resources out of open source free software, immense volunteer time, and member's dues.
Principles developed in a time-limited, collaborative-technology-driven process:
1. Bring our political processes and technology -- including resources, communication and support systems -- to a point at which all members of May First/People Link can have confidence in the reliability and control of their data. It is important to recognize varying levels of skill and interest in understanding technology.
2. Define what engaged membership in MF/PL actually means. This definition must come from the membership itself, prioritizing grassroots groups of people who face racial, gender, economic or other form of oppression, and who are currently MF/PL members in name but might not feel themselves to be members in practice. The definition should also draw from existing models of membership in our movements. This includes identifying rights and responsibilities of membership as well as opportunities for members to become more actively involved. The responsibilities and expectations of members to other members and to MFPL as a whole will be included.
[This principle, in particular, had every group's endorsement before a last-minute change.]
3. Develop governance models for May First/People Link as a membership organization, drawing from the experience and expertise of members and member organizations.
4. Develop and build direct contact relationships with progressive organizations locally and internationally, and raise our visibility in public discussions, debates, and decisions. This is about (1) defining use of and choice of technology as political within our movements, and (2) contributing to debates about the internet including privacy, corporate and government surveillance, data sovereignty, censorship, and access.
5. May First/People Link will fully claim its role in building an international, alter-globalization movement by continuing our involvement in such movements as the Social Forum, climate change, and Palestinian rights. This could engage our membership in several ways: organizing delegations of members to summit events and relying on the political experience of our membership to guide our region- and sector-specific work.
6. Continue the existing collaboration with members such as the Praxis Project, the Progressive Technology Project, and others, to train activists of color to become more capable technologists, and expand that work to include women and people from less traditional backgrounds. Also, continue giving support to and building on current training initiatives, while working to expand training programs in the Global South with partners and member organizations as opportunities are identified.
I had a great time at BioRAFT Drupal Nights on January 16th, 2014. Originally, Chris Wells (cwells) was scheduled to speak, but unfortunately he was down with the flu. Michelle Lauer (Miche) put an awesome presentation together on really short notice. With help from Diliny Corlesquet (Dcor) there was plenty to absorb intellectually along with the delicious Middle Eastern food. I love spontaneity, so it was fun to hop over to BioRAFT in Cambridge, MA, to be a member of a panel of senior Drupal developers.
I joined Seth Cohn, Patrick Corbett, Michelle Lauer and Erik Peterson to present some viable solutions to performance issues in Drupal.
Several slides covered the topic, Drupal High Performance, and a lively discussion ensued. The panel members delved into some of the top issues with Drupal performance under several different conditions. The discussion was based on the book High Performance Drupal, written by Jeff Sheltren, Narayan Newton, and Nathaniel Catchpole (O'Reilly) - http://shop.oreilly.com/product/0636920012269.do .
The room at BioRAFT was comfortably full with just the right amount of people to have an informal Q and A, and to directly address some concerns that developers were actually working on. Lurking in the back I spotted some of the top developers in the Drupal community (scor, mlncn, kay_v just to name a few) that were not on the panel, but they had a lot of experience to speak from during Q and A.
The video posted at the bottom is packed with great information and techniques - Miche even shared some code snippets that will get you going if you seek custom performance enhancements.
One of the discussions was around tools that can be used to gauge performance or tweak it.
I also found an online test site: http://drupalsitereview.com
We also talked a lot about caching and the several options that exist to deal with cache on a Drupal site.
There are options to cache content for anonymous users and there are solutions and modules for more robust sites with high traffic.
PHP 5.5 offers a good caching solution with integrated opcode caching. Opcode caches are a performance enhancement extension for PHP: some sites have shown a 3x performance boost by using opcode caching. Opcode caching presents no side effects beyond extra memory usage and should always be used in production environments. Below are a couple of the suggestions we discussed from the modules listed on Drupal.org:
The panel agreed that there are many things to consider when seeking to improve a site's performance:
A few more are discussed in the video below...
Sending out Be Wells to cwells! We look forward to a future presentation, and we are also pleased that on such short notice it did not seem too hard to gather a panel of several senior Drupal developers to discuss High Performance Drupal. See the whole discussion on YouTube.
In the previous entry, we wrote our first Drupal migration. In that example, we copied verbatim values from the source to the destination. More often than not, the data needs to be transformed in some way or another to match the format expected by the destination or to meet business requirements. Today we will learn more about process plugins and how they work as part of the Drupal migration pipeline.
The Migrate API offers a lot of syntactic sugar to make it easier to write migration definition files. Field mappings in the process section are an example of this. Each of them requires a process plugin to be defined. If none is manually set, then the get plugin is assumed. The following two code snippets are equivalent in functionality.
process:
  title: creative_title

process:
  title:
    plugin: get
    source: creative_title
The get process plugin simply copies a value from the source to the destination without making any changes. Because this is a common operation, get is considered the default. There are many process plugins provided by Drupal core and contributed modules. Their configuration can be generalized as follows:
process:
  destination_field:
    plugin: plugin_name
    config_1: value_1
    config_2: value_2
    config_3: value_3
The process plugin is configured within an extra level of indentation under the destination field. The plugin key is required and determines which plugin to use. Then, a list of configuration options follows. Refer to the documentation of each plugin to know what options are available. Some configuration options will be required while others will be optional. For example, the concat plugin requires a source, but the delimiter is optional. An example of its use appears later in this entry.
Sometimes, the destination requires a property or field to be set, but that information is not present in the source. Imagine you are migrating nodes. As we have mentioned, it is recommended to write one migration file per content type. If you know in advance that for a particular migration you will always create nodes of type Basic page, then it would be redundant to have a column in the source with the same value for every row. The data might not be needed. Or it might not exist. In any case, the default_value plugin can be used to provide a value when the data is not available in the source.
source: ...
process:
  type:
    plugin: default_value
    default_value: page
destination:
  plugin: 'entity:node'
The above example sets the type property for all nodes in this migration to page, which is the machine name of the Basic page content type. Do not confuse the name of the plugin with the name of its configuration property, as they happen to be the same: default_value. Also note that because a (content) type is manually set in the process section, the default_bundle key in the destination section is no longer required. You can see the latter being used in the example of writing your Drupal migration blog post.
Consider the following migration request: you have a source listing people with first and last names in separate columns. Both are capitalized. The two values need to be put together (concatenated) and used as the title of nodes of type Basic page. The character casing needs to be changed so that only the first letter of each word is capitalized. If there is a need to display them in all caps, CSS can be used for presentation. For example, FELIX DELATTRE would be transformed to Felix Delattre.
Tip: Question business requirements when they might produce undesired results. For instance, if you were to implement this feature as requested, DAMIEN MCKENNA would be transformed to Damien Mckenna. That is not the correct capitalization for the last name McKenna. If automatic transformation is not possible or feasible for all variations of the source data, take notes and perform manual updates after the initial migration. Evaluate as many use cases as possible and bring them to the client’s attention.
To implement this feature, let’s create a new module ud_migrations_process_intro, create a migrations folder, and write a migration definition file called udm_process_intro.yml inside it. Follow the instructions in this entry to find the proper location and folder structure, or download the sample module from https://github.com/dinarcon/ud_migrations It is the one named UD Process Plugins Introduction with machine name udm_process_intro. For this example, we assume a Drupal installation using the standard installation profile, which comes with the Basic page content type. Let’s see how to handle the concatenation of first and last name.
id: udm_process_intro
label: 'UD Process Plugins Introduction'
source:
  plugin: embedded_data
  data_rows:
    -
      unique_id: 1
      first_name: 'FELIX'
      last_name: 'DELATTRE'
    -
      unique_id: 2
      first_name: 'BENJAMIN'
      last_name: 'MELANÇON'
    -
      unique_id: 3
      first_name: 'STEFAN'
      last_name: 'FREUDENBERG'
  ids:
    unique_id:
      type: integer
process:
  type:
    plugin: default_value
    default_value: page
  title:
    plugin: concat
    source:
      - first_name
      - last_name
    delimiter: ' '
destination:
  plugin: 'entity:node'
The concat plugin can be used to glue together an arbitrary number of strings. Its source property contains an array of all the values that you want put together. The delimiter is an optional parameter that defines a string to add between the elements as they are concatenated. If not set, there will be no separation between the elements in the concatenated result. This plugin has an important limitation: you cannot use string literals as part of what you want to concatenate, for example, joining the string Hello with the value of the first_name column. All the values to concatenate need to be columns in the source or fields already available in the process pipeline. We will talk about the latter in a future blog post.
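If you do need a literal string in the result, one way around this limitation is to define the literal under the source plugin's constants key; constants become available in the process pipeline like regular source columns. A sketch, where the greeting constant is a hypothetical example:

```yaml
source:
  plugin: embedded_data
  constants:
    greeting: 'Hello'
  # data_rows and ids omitted for brevity.
process:
  title:
    plugin: concat
    source:
      - constants/greeting
      - first_name
    delimiter: ' '
```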
To execute the above migration, you need to enable the ud_migrations_process_intro module. Assuming you have Migrate Run installed, open a terminal, switch directories to your Drupal docroot, and execute the following command: drush migrate:import udm_process_intro. Refer to this entry if the migration fails. If it works, you will see three basic pages whose titles contain the names of some of my Drupal mentors. #DrupalThanks
Good progress so far, but the feature has not been fully implemented. You still need to change the capitalization so that only the first letter of each word in the resulting title is uppercase. Thankfully, the Migrate API allows chaining of process plugins. This works similarly to unix pipelines in that the output of one process plugin becomes the input of the next one in the chain. When the last plugin in the chain completes its transformation, the return value is assigned to the destination field. Let’s see this in action:
id: udm_process_intro
label: 'UD Process Plugins Introduction'
source: ...
process:
  type: ...
  title:
    -
      plugin: concat
      source:
        - first_name
        - last_name
      delimiter: ' '
    -
      plugin: callback
      callable: mb_strtolower
    -
      plugin: callback
      callable: ucwords
destination: ...
The callback process plugin passes a value to a PHP function and returns its result. The function to call is specified in the callable configuration option. Note that this plugin expects a source option containing a column from the source or a value from the process pipeline. That value is sent as the first argument to the function. Because we are using the callback plugin as part of a chain, the source is assumed to be the last output of the previous plugin. Hence, there is no need to define a source. So, we concatenate the columns, make them all lowercase, and then capitalize each word.
Relying on direct PHP function calls should be a last resort. Better alternatives include writing your own process plugins, which encapsulate your business logic separately from the migration definition. The callback plugin comes with its own limitation: you cannot pass extra parameters to the callable function. It will receive the specified value as its first argument and nothing else. In the above example, we could combine the calls to mb_strtolower() and ucwords() into a single call to mb_convert_case($source, MB_CASE_TITLE) if passing extra parameters were allowed.
Tip: You should have a good understanding of your source and destination formats. In this example, one of the values you want to transform is MELANÇON. Because of the cedilla (ç), using strtolower() is not adequate in this case since it would leave that character uppercase (melanÇon). Multibyte string functions (mb_*) are required for proper transformation. ucwords() is not one of them and would present similar issues if the first letters of the words are special characters. Attention should be given to the character encoding of the tables in your destination database.
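To see the difference concretely, here is a small standalone PHP snippet, separate from any migration code, that compares the byte-oriented and multibyte functions. It assumes the file is saved as UTF-8 and that the mbstring functions are available:

```php
<?php

$name = 'MELANÇON';

// strtolower() only lowercases ASCII bytes, so the multibyte Ç is untouched.
echo strtolower($name), PHP_EOL;             // melanÇon

// mb_strtolower() is multibyte-aware; chaining ucwords() then title-cases it.
echo ucwords(mb_strtolower($name)), PHP_EOL; // Melançon

// The single-call alternative, if the callback plugin accepted extra arguments.
echo mb_convert_case('FELIX DELATTRE', MB_CASE_TITLE), PHP_EOL; // Felix Delattre
```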
Technical note: mb_strtolower is a function provided by the mbstring PHP extension. It is not enabled by default, or you might not have it installed at all. In those cases, the function would not be available when Drupal tries to call it. The following error is produced when trying to call a function that is not available: The "callable" must be a valid function or method. For Drupal and this particular function, that error would never be triggered, even if the extension is missing. That is because Drupal core depends on some Symfony packages which in turn depend on the symfony/polyfill-mbstring package. The latter provides a polyfill for mb_* functions and has been leveraged since version 8.6.x of Drupal.
What did you learn in today’s blog post? Did you know that syntactic sugar allows you to write shorter plugin definitions? Were you aware of process plugin chaining to perform multiple transformations over the same data? Had you considered character encoding on the source and destination when planning your migrations? Are you making your best effort to avoid the callback process plugin? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with your colleagues.
Next: Migrating data into Drupal subfields
This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.