
This same functionality is theoretically possible for other list fields, but In Other Words does not currently support summarizing text and numeric lists. We have opened a feature request to do so, but do not currently have the need ourselves, so we welcome your involvement. Feel free to open issues about things you think would make the module better, too!

A final note. We put considerable effort into allowing In Other Words' conjoining symbols and words, as well as surrounding text, to appear immediately before or after the items in a list with no whitespace. (Note: This does not work when Twig debugging is enabled.) This means you can have information presented as a sentence that ends with a period, such as "Open Tuesday–Friday." It also means that if you want spacing, you need to add it as part of your join symbol or separator, your final join word, and your before and after text. This is a common gotcha in configuring In Other Words field formatters that you, perhaps, can now avoid!

Migration experts

We are acknowledged experts in Drupal migrations, leading paid trainings at Drupal conferences and beyond. Mauricio Dinarte's 31 Days of Drupal Migrations blog series attests to our passion for migrations, and we hope to continue living out that passion by helping you migrate your site to modern Drupal!

A full range of services

Beyond migrating your data, we offer services for site-building, theming, design, and strategy analysis—everything required to ensure a successful migration and future for your new Drupal site.

Get in touch

E-mail us at ask@agaric.coop, call us at +1 508 283 3557, or use the form below, and one of us worker-owners at Agaric will get back to you.

This blog post is about Drutopia right now, not about all the ways we can improve it.

Daily use

Building your website on Drutopia has the advantage of allowing multiple people to create content of different defined types that can be clearly related to one another, listed and presented in different ways, and filtered. This is the super-power of structured content.

Drutopia's main disadvantages are that making free-form changes to how it looks is slow and takes specialized knowledge (unlike Wix or Squarespace, which are primarily page builders), and it does not have many themes to choose from (as WordPress does).

The defined types of content available in Drutopia are:

  • Article for time-sensitive content like news or press releases, where the perspective is representative of the whole organization.
  • Blog for personal or journal-like posts, where authorship is more important.
  • Person for featuring people on your site such as staff, volunteers, or contributors.
  • Event for gatherings; events have a date and time, an event type, and all the usual Drutopia fields (title, image, description, topic, and tags).
  • Resource for material that can be text and images like any content; an attached file, such as a PDF; a link, such as a website URL; or an embedded video.
  • Campaign for organizing efforts; a campaign includes background information as well as the ability to list demands and updates.
  • Action for a specific, single action a user can take.

All of these types of content can be connected by Topic, for site-wide curated categorization, and by Tags, for site-wide free-form categorization. Most content types can be further distinguished by type (article type, person role or type, event type, etc.).

Content is composed of sections, which can be the usual WYSIWYG text editor that allows insertion of images and other media, or more specialized sections for image, video, or file. This capability to mix different kinds of sections on pages is under-utilized in Drutopia at present, but it can be extended to embed forms, including donation forms, or listings of other content.

Additional content types, which do not have the benefits of automatic listing pages with faceted search because they are meant for one-off or unique content, are:

  • Basic page for static content such as an ‘About us’ page or the privacy policy.
  • Landing page for custom pages, including potentially replacing the home page. Landing pages do not currently have a meaningful distinction from basic pages and are deprecated, but may be brought back with a special full-page free-form editor such as the Gutenberg editor developed by WordPress.

Long-term

A lot of the long-term advantage is having structured content you can make use of in ever-evolving ways, rather than having an undifferentiated mass of pages that can only be sorted through slowly and with difficulty.

But here we are taking a step back and looking at the platform more generally.

Drutopia is open source free software

In summary

Weebly, Wix, Squarespace, and inferior options bundled with other services (such as Mailchimp Website Builder and GoDaddy Website Builder) do not allow creating content of carefully defined types, with relationships and connections between content. There are also usually limitations or extra costs associated with having multiple user accounts, or multiple people logged in at the same time. They do let you change how the whole site and different pages look pretty easily. Moving your content to different hosting while keeping the software that runs the site is impossible, and switching to another platform while keeping your content is difficult.

WordPress allows some structured content and some customization of the look of the whole site pretty easily. WordPress can be hosted in different places (beware proprietary plugins though). WordPress content exports well if you want to change to a different software platform. Drutopia can import WordPress content.

Drutopia comes with a set of useful kinds of content already defined, complete with listing pages that can be filtered by cross-site topics and within-section types. Drutopia has limited visual customization currently available without knowing HTML, CSS, how to make templates, and how to work with a local development environment. Drutopia content exports well if you want to change to a different software platform, although to get the full benefit the other platform will need Drupal's capability to have structured content with rich relationships among content.

Worker-owned cooperatives are businesses owned and controlled by the people who work in them. This is a collection of resources that will help you navigate the world of cooperatives and the opportunities they offer.
This list will continue to grow and connect with cooperative networks around the globe.

Boston area:

National/international:

Find worker cooperatives:

Legal resources

News articles about cooperatives:

Presentations and speakers:

Cooperative Resources:

Some great events based around building cooperatives and collectives happened in 2014: the California Cooperative Conference and Chicago Freedom Summer. Videos of the speakers and sessions from these events should be available online.

Whimsical simple silhouette of adult figure with magnifying glass followed by a child figure.

Find It

Program Locator and
Event Discovery platform

Ksnip screenshot tool.

I recently switched from Mac OS to Elementary, a Linux distribution focused on ease of use and privacy. As both a user experience designer and free software supporter, I am taking screenshots and annotating them all the time. After trying out several different tools, the one I enjoy most by far is Ksnip.

Installation

Install ksnip with your preferred package manager. In my case, I installed it via apt:

sudo apt-get install ksnip

Configuration

Ksnip comes with quite a few configuration options, including:

  • Location to save screenshots to
  • Default screenshot file name
  • Image grabber behavior
  • Cursor color and thickness
  • Text font

You can also integrate it with your Imgur account.

Configuration settings of Ksnip.

Usage

My favorite part of Ksnip is that it has all the annotation tools I need (plus one I hadn't thought of!).

You can annotate with:

  • pen
  • marker
  • rectangles
  • ellipses
  • text

You can also blur areas to remove sensitive information.

And my new favorite tool, numbered dots for steps on an interface.

KSnip Features List - https://github.com/DamirPorobic/ksnip#features

About the Creator

I'm enjoying Ksnip so much that I reached out to the creator, Damir Porobic, to learn more about the project.

I asked what inspired him to create Ksnip, and here's what he said:

"I switched from Windows to Linux a few years ago and missed the Windows Snipping Tool that I was used to on Windows. All other screenshot tools at that time were either huge (a lot of buttons and complex features) or lacked key features like annotations so I decided to build a simple Snipping Tool Clone but with time it got more and more feature so here we are."

This is exactly what I found as I was evaluating screenshot tools. It's great that he took the time to build a solution himself and freely share it for others to benefit from.

As for the future of Ksnip, Damir would like to add Global Shortcuts (at least for Windows), tabs for new screenshots, and allow the application to run in the background. There is also a growing list of feature requests on GitHub.  

Ways to Help

The biggest need is with development. Damir and his wife are expecting a baby soon so he won't have as much time to devote to the project. He is available to review and accept pull requests though.

Also, the project could benefit from additional installation options: Snap and Flatpak packages, installers for macOS, and a setup program for Windows.

Lastly, use the project, then rate and review it on AlternativeTo.net and other tech comparison platforms. And if you can spare a few bucks, donations are always appreciated.

BigBlueButton.org

Do you need a safer way to communicate and to host events? Agaric offers video chat hosting with BigBlueButton, free software. Host events, meetings, and conferences in an easy-to-use, professional but fun application.

We value learning new things and helping one another, so every Thursday at 3pm Eastern Time we take an hour to share things we are working on. We have deep conversations on the ways we can work together. We use screen-sharing to show projects we are involved in, and sometimes we doodle on the whiteboard as we talk.

Show and Tell Collaborative Doodle

Everyone is welcome to join the chat or just listen. The informal atmosphere supports us getting to know each other better and forming stronger relationships. Anyone can suggest a topic for discussion, give a presentation, or ask for input and help on a project. We use a poll to determine if we want to record the session. Agaric hosts these show and tells publicly because we realize some of us work alone or in organizations that do not encourage skill-sharing, or may just be interested in broadening our knowledge or sharing some code. So we invite you—our partners, students, colleagues, friends—to take part in watching or giving short presentations.

Direct link to the Show and Tell chatroom

Get on the Show and Tell Mailing List to receive invitations each week with the upcoming topics.

Do you want to host a Show and Tell discussion or presentation? Here is the email template for sending a notice to the list at showandtell@lists.mayfirst.org. If you are already signed up on the email list, you should be able to send a notice to the group! Feel free to choose an alternative time if Thursdays at 3PM ET does not work well for you - experiment!

We shall see you soon!

A selfie of Micky Metts in a colorful, casual Mexican restaurant, with Martin Owens, Chris Thompson, Mauricio Dinarte, and others visible over her shoulder.

Show and Tell

Share what you have learned. See what others are doing.

On a cold December night we met at the Industry Lab to celebrate the Worc'n group and our membership in the cooperatives we belong to. The crowd was very energetic and several small groups were conversing on different topics throughout the evening. A holiday party is a time to see old friends and make new friends.

A cooperative is a structure for making new connections and sharing ideas. What better mix for a party? 2014 ended on a high note as the cooperative movement gains ground in Cambridge, MA. Worc'n (the Worker-Owned and Run Cooperative Network of Greater Boston) has been around for years, growing a membership and creating value, and it counts an impressive number of cooperatives and worker-owners among its members. This party was something of a merging of groups, bringing that established network together with newer efforts.

People at the Worker-Owned and Run Cooperative Network of Greater Boston meetup.

The Boston/Cambridge Worker-ownership Meetup started just a few months ago, but through the network of local cooperatives the word spread quickly, and people have been expressing their happiness that there are now meetings to attend and new people to network with. The party was a potluck, and everyone brought something to share. There were homemade cookies, cakes, and pie. We were also fortunate to have Monica Leitner-Laserna of La Sanghita Cafe attend the party with some delicious vegan foods from her newly formed cooperative restaurant in East Boston.

La Sanghita Cafe

La Sanghita Cafe is Boston's only cooperative restaurant, and they have a great vegan menu. Located just outside Maverick Square in East Boston, the restaurant is open daily for lunch, and Wednesday through Saturday they offer a dinner menu. The menu is filled with sweet and savory items, and the space is comfortable and cozy. We look forward to having the coop members as guests at our monthly cooperative worker/owner meetup, to share the details on how it works to build a cooperative restaurant. With good food, good conversation, and many engaging discussions on topics involving cooperation, a wonderful evening was had by all. We should do this more often.

Today we are going to learn how to migrate users into Drupal. The example code will be explained in two blog posts. In this one, we cover the migration of email, timezone, username, password, and status. In the next one, we will cover creation date, roles, and profile pictures. Several techniques will be implemented to ensure that the migrated data is valid. For example, making sure that usernames are not duplicated.

Although the example is standalone, we will build on many of the concepts that have already been covered in the series. For instance, a file migration is included to import images used as profile pictures. This topic has been explained in detail in a previous post, and the example code is pretty similar. Therefore, no explanation is provided about the file migration to keep the focus on the user migration. Feel free to read other posts in the series if you need a refresher.

Example field mapping for user migration

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations. The module to enable is UD users, whose machine name is ud_migrations_users. The two migrations to execute are udm_user_pictures and udm_users, as shown below. Notice that both migrations belong to the same module. Refer to this article to learn where the module should be placed.
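For example, assuming you have Drush 9 or later available and the Migrate Tools module installed (it provides the migration commands for Drush), you could run the two migrations in dependency order like this:

drush migrate:import udm_user_pictures
drush migrate:import udm_users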

The example assumes Drupal was installed using the standard installation profile. Particularly, we depend on a Picture (user_picture) image field attached to the user entity. The word in parentheses represents the machine name of the image field.

The explanation below is only for the user migration. It depends on a file migration to get the profile pictures. One motivation to have two migrations is for the images to be deleted if the file migration is rolled back. Note that other techniques exist for migrating images without having to create a separate migration. We have covered two of them in the articles about subfields and constants and pseudofields.

Understanding the source

It is very important to understand the format of your source data. This will guide the transformation process required to produce the expected destination format. For this example, it is assumed that the legacy system from which users are being imported did not have unique usernames. Emails were used to uniquely identify users, but that is not desired in the new Drupal site. Instead, a username will be created from a public_name source column. Special measures will be taken to prevent duplication as Drupal usernames must be unique. Two more things to consider. First, source passwords are provided in plain text (never do this!). Second, some elements might be missing in the source like roles and profile picture. The following snippet shows a sample record for the source section:

source:
  plugin: embedded_data
  data_rows:
    - legacy_id: 101
      public_name: 'Michele'
      user_email: 'micky@example.com'
      timezone: 'America/New_York'
      user_password: 'totally insecure password 1'
      user_status: 'active'
      member_since: 'January 1, 2011'
      user_roles: 'forum moderator, forum admin'
      user_photo: 'P01'
  ids:
    legacy_id:
      type: integer

Configuring the destination and dependencies

The destination section specifies that user is the target entity. When that is the case, you can set an optional md5_passwords configuration. If it is set to true, the system will take an MD5-hashed password and convert it to the hashing algorithm that Drupal uses. For more information on password migrations, refer to these articles for basic and advanced use cases. To migrate the profile pictures, a separate migration is created. The dependency of user on file is added explicitly. Refer to these articles for more information on migrating images and files and setting dependencies. The following code snippet shows how the destination and dependencies are set:

destination:
  plugin: 'entity:user'
  md5_passwords: true
migration_dependencies:
  required:
    - udm_user_pictures
  optional: []

Processing the fields

The interesting part of a user migration is the field mapping. The specific transformation will depend on your source, but some arguably complex cases will be addressed in the example. Let’s start with the basics: verbatim copies from source to destination. The following snippet shows three mappings:

mail: user_email
init: user_email
timezone: user_timezone

The mail, init, and timezone entity properties are copied directly from the source. Both mail and init are email addresses. The difference is that mail stores the current email, while init stores the one used when the account was first created. The former might change if the user updates their profile, while the latter will never change. The timezone needs to be a string taken from a specific set of values. Refer to this page for a list of supported timezones.

name:
  - plugin: machine_name
    source: public_name
  - plugin: make_unique_entity_field
    entity_type: user
    field: name
    postfix: _

The name entity property stores the username, which has to be unique in the system. If the source data contained a unique value for each record, it could be used to set the username; but none of the unique source columns (e.g., legacy_id) is suitable to be used as a username. Therefore, extra processing is needed. The machine_name plugin converts the public_name source column into a transliterated string with some restrictions: any character that is not a number or letter will be converted to an underscore. The transformed value is sent to the make_unique_entity_field plugin. This plugin makes sure its input value is not repeated in the whole system for a particular entity field. In this example, the username will be unique. The plugin is configured by indicating which entity type and field (property) you want to check. If an equal value already exists, a new one is created by appending what you define as postfix plus a number. In this example, there are two records with public_name set to Benjamin. Eventually, the usernames produced by running the process plugin chain will be: benjamin and benjamin_1.

process:
  pass:
    plugin: callback
    callable: md5
    source: user_password
destination:
  plugin: 'entity:user'
  md5_passwords: true

The pass entity property stores the user’s password. In this example, the source provides the passwords in plain text. Needless to say, that is a terrible idea. But let’s work with it for now. Drupal uses portable PHP password hashes implemented by PhpassHashedPassword. Understanding the details of how Drupal converts one algorithm to another will be left as an exercise for the curious reader. In this example, we are going to take advantage of a feature provided by the migrate API to automatically convert MD5 hashes to the algorithm used by Drupal. The callback plugin is configured to use the md5 PHP function to convert the plain text password into a hashed version. The last piece of the puzzle is setting, in the destination section, the md5_passwords configuration to true. This will take care of converting the already MD5-hashed password to the value expected by Drupal.

Note: MD5-hashed passwords are insecure. In the example, the password is hashed with MD5 as an intermediate step only. Drupal uses other algorithms to store passwords securely.

status:
  plugin: static_map
  source: user_status
  map:
    inactive: 0
    active: 1

The status entity property stores whether a user is active or blocked from the system. The source user_status values are strings, but Drupal stores this data as a boolean. A value of zero (0) indicates that the user is blocked, while a value of one (1) indicates that the user is active. The static_map plugin is used to manually map the values from source to destination. This plugin expects a map configuration containing an array of key-value mappings. The value from the source is on the left. The value expected by Drupal is on the right.

Technical note: Booleans are true or false values. Even though Drupal treats the status property as a boolean, it is internally stored as a tiny int in the database. That is why the numbers zero and one are used in the example. For this particular case, using a number or a boolean value on the right side of the mapping produces the same result, as shown below.
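That is, this equivalent mapping (a sketch based on the note above, using booleans instead of numbers) would also work:

status:
  plugin: static_map
  source: user_status
  map:
    inactive: false
    active: true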

In the next blog post, we will continue with the user migration. Particularly, we will explain how to migrate the user creation time, roles, and profile pictures.

What did you learn in today’s blog post? Have you migrated user passwords before, either in plain text or hashed? Did you know how to prevent duplicates for values that need to be unique in the system? Were you aware of the plugin that allows you to manually map values from source to destination? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

Next: Migrating users into Drupal - Part 2

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

What do privacy and data trust mean to you, in your daily work and life? Do they mean that the door to your house is locked or that your email is encrypted, or both? Do you use cloud services or social media for business or for personal lifestyle news and updates? No matter how you use social media and engage in conversation, you are tracked and sorted into buckets. What does that really mean?

Background

For more than a decade, Agaric has had a special expertise in content migrations. The first substantial treatment of Drupal content migration in a book came with Upgrading a Drupal Site from 6 to 7 by Benjamin Melançon and Stefan Freudenberg with a Data Migration overview by Mike Ryan in The Definitive Guide to Drupal 7, a 35-author tome of Drupal knowledge from experts led by Agaric co-founder Ben Melançon.

More recently, Agaric worker-owner Mauricio Dinarte wrote an epic series of 31 blog post tutorials about migrating into Drupal in one month, and he leads migration trainings at Drupal conferences (DrupalCon), camps, and online, including Drupal 8/9 content migrations and Upgrading to Drupal 8/9 using the Migrate API.

How can we help?

Please share where you want to go with your site and a way to contact you!

Yes it's true, for the past few months we've been hard at work with a lot of other co-authors on The Definitive Guide to Drupal 7.

The Definitive Guide to Drupal 7 accelerates people along the Drupal learning curve by covering all aspects of building web sites with Drupal: architecture and configuration; module development; front end development; running projects sustainably; participating in the community; and contributing to Drupal's code and documentation.

Check out the website today! http://definitivedrupal.org/

A round red-capped mushroom with white spots.

A lot of people have asked us what 'agaric' means exactly, and why we chose to use it. Wikipedia describes agaric as a “type of fungal fruiting body characterized by...” blah blah blah (read the wiki)

Basically, a mushroom.

Why a mushroom, you ask? Well, mushrooms are the 'fruiting bodies' of larger, sometimes vastly larger, mycelial networks. What's a mycelial network? In short, a network of mycelium, which is “the vegetative part of a fungus, consisting of a mass of branching, thread-like hyphae” (read the wiki --> http://en.wikipedia.org/wiki/Mycelium)

Really, just read the wiki, it's cool, here's an excerpt:

A mycelium may be minute, forming a colony that is too small to see, or it may be extensive. "Is this the largest organism in the world? This 2,400-acre site in eastern Oregon had a contiguous growth of mycelium before logging roads cut through it. Estimated at 1,665 football fields in size and 2,200 years old, this one fungus has killed the forest above it several times over, and in so doing has built deeper soil layers that allow the growth of ever-larger stands of trees. Mushroom-forming forest fungi are unique in that their mycelial mats can achieve such massive proportions."

So there are these vast networks of interconnected life and all we really know of their existence without really close inspection is the fruiting bodies, the mushrooms or agarics. These networks are vital to ecosystems, they play an important role in the circle of nature and life, and in almost every case their presence can be proven to be beneficial in the long run.

We all know that the internet is a huge network of computers which covers a lot of the planet. The internet is also pretty much vital to our way of life these days as well, with emails and it being the information age and such... Here at Agaric Design, we see the phenomenon of the internet as an 'organism' of sorts, just like a mycelial network but on a planetary scale. The 'fruits' that this network produces are the websites out there that enhance our daily lives in some way. The ones that inform, connect, and produce positive change in the world.

With this vision in mind, we crack open our laptops everyday and do what we do, constantly learning more about this interweb thingy. We think that there's more to it than miles of wires and megawatts of pulsating electricity, it is more than just a sum of its parts. We think that the real potential of the internet still remains untapped...

We think we can help it fruit.

Talk about web components has been going on for quite some time now: the term was coined in 2011. In this blog post, I will discuss the basics of what they are and what they do, and introduce the primary technological foundations for them. In future articles, I'll dig into the specifics and sample code - as they are currently supported or else polyfilled for today.

Web Components are a group of related standards promising to bring component-based extensibility to HTML documents. The great promise is that once you or someone else builds some component that provides a particular feature, it can easily be included and reused in other projects. For example, if you've created, say, a "treeview" that displays data in a tree form, you can wrap up its definition in a component. Then, it would be possible for anyone to include the component easily wherever they might want to use a treeview. We'll ultimately get to defining our own components in a future article, but for now, let's look at a special sort of "built-in" web component.

Perhaps the most common HTML element that comes up when discussing web components is the video tag. This is because the video tag offers a simple and very clear browser-native example of precisely what a web component looks like from the point of view of a consumer of a web component. If we include a video tag in a page, we don't end up with a naked auto-playing video display (by default). Instead, we end up with a nice little video player, complete with start/stop/seek and volume controls offered up by our browser:

A video control as rendered by Chromium.

In our browser (which again, in this case is Chromium), we can enable the developer tools option, found under the Elements section of the settings, titled "Show user agent shadow DOM." With this option set, we are able to see how Chromium builds upon the video tag when it is rendered:

Screenshot of code for video component.

As you can see, beneath the video element comes one of the core pieces of the Web Components technologies: the shadow root. Chromium has treated the video tag as a "shadow host," a containing element for a "shadow root," off of which hangs the implementation of the video element's controls. This shadow host/shadow root mechanism is the basis for encapsulation in web components: it separates the details of an element's implementation from the outside document. CSS styles and JavaScript inside the shadow root are scoped to that element. Inside the shadow root, the implementation of the video tag is a sub-tree of elements which is not directly inline with the document's top-level DOM. This means, for example, you can neither find nor directly address elements within the shadow root from the outside document.

There are at least a few libraries that enable building web components for current browsers, regardless of native browser support for the web components standards. The one I'll demonstrate quickly here is pulled directly from Polymer's home page. I'm using this because it demonstrates what I believe is about as close to the ideal for web components (from a consumer perspective) as we can get today, aside from having to use a polyfill for some browsers:

<!-- Polyfill Web Components support for older browsers -->

<script src="components/webcomponentsjs/webcomponents-lite.min.js"></script>

<!-- Import element -->

<link rel="import" href="components/google-map/google-map.html">

<!-- Use element -->

<google-map latitude="37.790" longitude="-122.390"></google-map>

Again, the polyfill ensures that our browser will handle the subsequent tags. This library aims to support the core web components features in a way that stays consistent with the standards as they evolve. The link tag with the import relation is another standard web components feature (though Mozilla does not favor supporting it further) that imports an external definition - in this case, the definition of the google-map tag. When all browsers support web components natively, things will hopefully be as simple as importing the reference to the component and then using it, as is done in the last statement above.

Hopefully this gives you a quick glimpse into how web components might make our web applications grow through a variety of new interface tools we can build on, extend, and easily utilize. In future articles, we'll look at examples of using the various building blocks of web components, and how the technology continues to evolve.

Today we will learn how to migrate content from a JSON file into Drupal using the Migrate Plus module. We will show how to configure the migration to read files from the local file system and remote locations. The example includes node, images, and paragraphs migrations. Let’s get started.

Example configuration of JSON source migration

Note: Migrate Plus has many more features. For example, it contains source plugins to import from XML files and SOAP endpoints. It provides many useful process plugins for DOM manipulation, string replacement, transliteration, etc. The module also lets you define migration plugins as configurations and create groups to share settings. It offers a custom event to modify the source data before processing begins. In today’s blog post, we are focusing on importing JSON files. Other features will be covered in future entries.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations. The module to enable is UD JSON source migration, whose machine name is ud_migrations_json_source. It comes with four migrations: udm_json_source_paragraph, udm_json_source_image, udm_json_source_node_local, and udm_json_source_node_remote.

You can get the Migrate Plus module using composer: composer require 'drupal/migrate_plus:^5.0'. This will install the 8.x-5.x branch where new development will happen. This branch was created to introduce breaking changes in preparation for Drupal 9. As of this writing, the 8.x-4.x branch has feature parity with the newer branch. If your Drupal site is not composer-based, you can download the module manually.

Understanding the example setup

This migration will reuse the same configuration from the introduction to paragraph migrations example. Refer to that article for details on the configuration: the destinations will be the same content type, paragraph type, and fields. The source will be changed in today's example, as we use it to explain JSON migrations. The end result will again be nodes containing an image and a paragraph with information about someone’s favorite book. The major difference is that we are going to read from JSON. In fact, three of the migrations will read from the same file. The following snippet shows a reduced version of the file to get a sense of its structure:

{
  "data": {
    "udm_people": [
      {
        "unique_id": 1,
        "name": "Michele Metts",
        "photo_file": "P01",
        "book_ref": "B10"
      },
      {...},
      {...}
    ],
    "udm_book_paragraph": [
      {
        "book_id": "B10",
        "book_details": {
          "title": "The definite guide to Drupal 7",
          "author": "Benjamin Melançon et al."
        }
      },
      {...},
      {...}
    ],
    "udm_photos": [
      {
        "photo_id": "P01",
        "photo_url": "https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg",
        "photo_dimensions": [240, 351]
      },
      {...},
      {...}
    ]
  }
}

Note: You can literally swap migration sources without changing any other part of the migration. This is a powerful feature of ETL frameworks like Drupal’s Migrate API. Although a verbatim swap is possible, the example includes slight changes to demonstrate various plugin configuration options. Also, some machine names had to be changed to avoid conflicts with other examples in the demo repository.

Migrating nodes from a JSON file

In any migration project, understanding the source is very important. For JSON migrations, there are two major considerations. First, where in the file hierarchy does the data that you want to import lie? It can be at the root of the file or several levels deep in the hierarchy. Second, when you get to the array of records that you want to import, which fields are going to be made available to the migration? It is possible that each record contains more data than needed. For improved performance, it is recommended to manually include only the fields that will be required for the migration. The following code snippet shows part of the local JSON file relevant to the node migration:

{
  "data": {
    "udm_people": [
      {
        "unique_id": 1,
        "name": "Michele Metts",
        "photo_file": "P01",
        "book_ref": "B10"
      },
      {...},
      {...}
    ]
  }
}

The array of records containing node data lies two levels deep in the hierarchy, starting with data at the root and then descending one level to udm_people. Each element of this array is an object with four properties:

  • unique_id is the unique identifier for each record within the data/udm_people hierarchy.
  • name is the name of a person. This will be used in the node title.
  • photo_file is the unique identifier of an image that was created in a separate migration.
  • book_ref is the unique identifier of a book paragraph that was created in a separate migration.

The following snippet shows the configuration to read a local JSON file for the node migration:

source:
  plugin: url
  data_fetcher_plugin: file
  data_parser_plugin: json
  urls:
    - modules/custom/ud_migrations/ud_migrations_json_source/sources/udm_data.json
  item_selector: data/udm_people
  fields:
    - name: src_unique_id
      label: 'Unique ID'
      selector: unique_id
    - name: src_name
      label: 'Name'
      selector: name
    - name: src_photo_file
      label: 'Photo ID'
      selector: photo_file
    - name: src_book_ref
      label: 'Book paragraph ID'
      selector: book_ref
  ids:
    src_unique_id:
      type: integer

The name of the plugin is url. Because we are reading a local file, the data_fetcher_plugin  is set to file and the data_parser_plugin to json. The urls configuration contains an array of file paths relative to the Drupal root. In the example, we are reading from one file only, but you can read from multiple files at once. In that case, it is important that they have a homogeneous structure. The settings that follow will apply equally to all the files listed in urls.

The item_selector configuration indicates where in the JSON file lies the array of records to be migrated. Its value is an XPath-like string used to traverse the file hierarchy. In this case, the value is data/udm_people. Note that you separate each level in the hierarchy with a slash (/).

fields has to be set to an array. Each element represents a field that will be made available to the migration. The following options can be set:

  • name is required. This is how the field is going to be referenced in the migration. The name itself can be arbitrary. If it contains spaces, you need to put double quotation marks (") around it when referring to it in the migration.
  • label is optional. This is a description used when presenting details about the migration. For example, in the user interface provided by the Migrate Tools module. When defined, you do not use the label to refer to the field. Keep using the name.
  • selector is required. This is another XPath-like string to find the field to import. The value must be relative to the location specified by the item_selector configuration. In the example, the fields are direct children of the records to migrate. Therefore, only the property name is specified (e.g., unique_id). If you had nested objects or arrays, you would use a slash (/) character to go deeper in the hierarchy. This will be demonstrated in the image and paragraph migrations.

Finally, you specify an ids array of field names that uniquely identify each record. As already stated, the src_unique_id field serves that purpose. The following snippet shows part of the process, destination, and dependencies configuration of the node migration:

process:
  field_ud_image/target_id:
    plugin: migration_lookup
    migration: udm_json_source_image
    source: src_photo_file
destination:
  plugin: 'entity:node'
  default_bundle: ud_paragraphs
migration_dependencies:
  required:
    - udm_json_source_image
    - udm_json_source_paragraph
  optional: []

The source for setting the image reference is src_photo_file. Again, this is the name of the field, not the label nor the selector. The configuration of the migration lookup plugin and the dependencies point to two JSON migrations that come with this example. One is for migrating images and the other for migrating paragraphs.

Migrating paragraphs from a JSON file

Let’s consider an example where the records to migrate have many levels of nesting. The following snippets show part of the local JSON file and source plugin configuration for the paragraph migration:

{
  "data": {
    "udm_book_paragraph": [
      {
        "book_id": "B10",
        "book_details": {
          "title": "The definite guide to Drupal 7",
          "author": "Benjamin Melançon et al."
        }
      },
      {...},
      {...}
    ]
  }
}
source:
  plugin: url
  data_fetcher_plugin: file
  data_parser_plugin: json
  urls:
    - modules/custom/ud_migrations/ud_migrations_json_source/sources/udm_data.json
  item_selector: data/udm_book_paragraph
  fields:
    - name: src_book_id
      label: 'Book ID'
      selector: book_id
    - name: src_book_title
      label: 'Title'
      selector: book_details/title
    - name: src_book_author
      label: 'Author'
      selector: book_details/author
  ids:
    src_book_id:
      type: string

The plugin, data_fetcher_plugin, data_parser_plugin and urls configurations have the same values as in the node migration. The item_selector and ids configurations are slightly different to represent the path to paragraph records and the unique identifier field, respectively.

The interesting part is the value of the fields configuration. Taking data/udm_book_paragraph as a starting point, the records with paragraph data have a nested structure. Notice that book_details is an object with two properties: title and author. To refer to them, the selectors are book_details/title and book_details/author, respectively. Note that you can go as many levels deep in the hierarchy to find the value that should be assigned to the field. Every level in the hierarchy is separated by a slash (/).

In this example, the target is a single paragraph type. But a similar technique can be used to migrate multiple types. One way to configure the JSON file is to have two properties: paragraph_id would contain the unique identifier for the record, and paragraph_data would be an object with a property to set the paragraph type, plus an arbitrary number of extra properties with the data to be migrated. In the process section, you would iterate over the records to map the paragraph fields. A sketch of such a source configuration appears below.
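As a rough sketch only (the file path, item selector, and the paragraph_id and paragraph_data property names follow the hypothetical structure described above; none of this is part of the example module), the source configuration could look like this:

source:
  plugin: url
  data_fetcher_plugin: file
  data_parser_plugin: json
  # Hypothetical file path, for illustration purposes only.
  urls:
    - modules/custom/my_module/sources/udm_multiple_paragraphs.json
  item_selector: data/udm_paragraphs
  fields:
    - name: src_paragraph_id
      label: 'Paragraph ID'
      selector: paragraph_id
    # The paragraph type lives inside the nested paragraph_data object.
    - name: src_paragraph_type
      label: 'Paragraph type'
      selector: paragraph_data/type
  ids:
    src_paragraph_id:
      type: string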

The following snippet shows part of the process configuration of the paragraph migration:

process:
  field_ud_book_paragraph_title: src_book_title
  field_ud_book_paragraph_author: src_book_author

Migrating images from a JSON file

Let’s consider an example where the records to migrate have more data than needed. The following snippets show part of the local JSON file and source plugin configuration for the image migration:

{
  "data": {
    "udm_photos": [
      {
        "photo_id": "P01",
        "photo_url": "https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg",
        "photo_dimensions": [240, 351]
      },
      {...},
      {...}
    ]
  }
}
source:
  plugin: url
  data_fetcher_plugin: file
  data_parser_plugin: json
  urls:
    - modules/custom/ud_migrations/ud_migrations_json_source/sources/udm_data.json
  item_selector: data/udm_photos
  fields:
    - name: src_photo_id
      label: 'Photo ID'
      selector: photo_id
    - name: src_photo_url
      label: 'Photo URL'
      selector: photo_url
  ids:
    src_photo_id:
      type: string

The plugin, data_fetcher_plugin, data_parser_plugin and urls configurations have the same values as in the node migration. The item_selector and ids configurations are slightly different to represent the path to image records and the unique identifier field, respectively.

The interesting part is the value of the fields configuration. Taking data/udm_photos as a starting point, the records with image data have extra properties that are not used in the migration. Particularly, the photo_dimensions property contains an array with two values representing the width and height of the image, respectively. To ignore this property, you simply omit it from the fields configuration. In case you wanted to use it, the selectors would be photo_dimensions/0 for the width and photo_dimensions/1 for the height, as sketched after this paragraph. Note that you use a zero-based numerical index to get the values out of arrays. Like with objects, a slash (/) is used to separate each level in the hierarchy. You can go as far as necessary in the hierarchy.
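For instance, if you did want the dimensions, the fields configuration could include entries like these (the src_photo_width and src_photo_height names are arbitrary illustrations, not part of the example module):

  fields:
    - name: src_photo_width
      label: 'Photo width'
      # First element of the photo_dimensions array (zero-based index).
      selector: photo_dimensions/0
    - name: src_photo_height
      label: 'Photo height'
      # Second element of the photo_dimensions array.
      selector: photo_dimensions/1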

The following snippet shows part of the process configuration of the image migration:

process:
  psf_destination_filename:
    plugin: callback
    callable: basename
    source: src_photo_url

JSON file location

When using the file data fetcher plugin, you have three options to indicate the location to the JSON files in the urls configuration:

  • Use a relative path from the Drupal root. The path should not start with a slash (/). This is the approach used in this demo. For example, modules/custom/my_module/json_files/example.json.
  • Use an absolute path pointing to the JSON location in the file system. The path should start with a slash (/). For example, /var/www/drupal/modules/custom/my_module/json_files/example.json.
  • Use a stream wrapper.

Being able to use stream wrappers gives you many more options. For instance:

  • Files located in the public, private, and temporary file systems managed by Drupal. This leverages functionality already available in Drupal core. For example: public://json_files/example.json (see the snippet after this list).
  • Files located in profiles, modules, and themes. You can use the System stream wrapper module or apply this core patch to get this functionality. For example, module://my_module/json_files/example.json.
  • Files located in remote servers including RSS feeds. You can use the Remote stream wrapper module to get this functionality. For example, https://understanddrupal.com/json-files/example.json.
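For example, assuming a copy of the data file had been placed in Drupal's public file system under a json_files directory (a hypothetical location, not part of the example module), only the urls entry of the source configuration would change:

source:
  plugin: url
  data_fetcher_plugin: file
  data_parser_plugin: json
  # Stream wrapper pointing to the public file system managed by Drupal.
  urls:
    - 'public://json_files/example.json'
  item_selector: data/udm_people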

Migrating remote JSON files

Migrate Plus provides another data fetcher plugin named http. You can use it to fetch files using the http and https protocols. Under the hood, it uses the Guzzle HTTP Client library. In a future blog post we will explain this data fetcher in more detail. For now, the udm_json_source_node_remote migration demonstrates a basic setup for this plugin. Note that only the data_fetcher_plugin and urls configurations are different from the local file example. The following snippet shows part of the configuration to read a remote JSON file for the node migration:

source:
  plugin: url
  data_fetcher_plugin: http
  data_parser_plugin: json
  urls:
    - https://api.myjson.com/bins/110rcr
  item_selector: data/udm_people
  fields: ...
  ids: ...

And that is how you can use JSON files as the source of your migrations. Many more configurations are possible. For example, you can provide authentication information to get access to protected resources. You can also set custom HTTP headers. Examples will be presented in a future entry.

What did you learn in today’s blog post? Have you migrated from JSON files before? If so, what challenges have you found? Did you know that you can read local and remote files? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

Next: Migrating XML files into Drupal

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors: Drupalize.me by Osio Labs has online tutorials about migrations, among other topics, and Agaric provides migration trainings, among other services.  Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.