In my case I am looking for Content referenced from field_office_location. This is the entity reference field on my Event content type, and is the field that my view will use for comparison while querying data. Click Add and configure relationships, and then Apply on the next screen. Click the Add button next to Contextual Filters.
The Climate Justice Action Map (CJAM) is a custom mapping tool that pulls 350.org events and groups from multiple data sources (e.g., ActionKit, EveryAction, CiviCRM) and displays an interactive map supporters can use to get involved.
It can be embedded within websites with many customization options (e.g., preset the map center to a location, show the map's text and buttons in a different language, show only events related to a particular campaign, etc.).
It uses Mapbox for the map, OpenStreetMap for the tileset, and Google Maps for the search lookup.
The CJAM Extract, Transform, Load (ETL) application is a data processor written in Python that runs every 15 minutes and pulls in data from those many sources (e.g., EveryAction, CiviCRM) via APIs and direct SQL queries. It writes the combined event and group data to a JSON data file hosted on Amazon S3, which is then consumed by the CJAM JavaScript.
We met with 350 in mid-June, with the strikes set for September 20th and organizing pushes in July and August. With tight deadlines, a new team and a new codebase, we quickly got to work understanding the goals of the map, its current implementation and what needed to be done for each milestone.
On projects demanding quick turnarounds it's tempting to dive headfirst into the issue queue. We know, though, that a project is only successful if everyone is aligned on its overall goals. Luckily, the product team already had excellent documentation (they even had a slideshow!) on the purpose of the climate action map and its key audiences.
The product team's documents outlining the map's goals and key audiences were great to have coming into our kickoff call.
Getting familiar with the inner workings of the climate action map was particularly challenging because the code was essentially in two states: the main branch with the original custom JavaScript, and a refactor branch where the transition to React.js was happening. React is one of the most popular and widely used JavaScript frameworks. Converting the application to React made the code easier to maintain and build upon. The original volunteer developer had begun this conversion, and there were new features written in the React way that would remain unavailable until the refactoring was complete.
Mauricio and Chris met with him to get clear on how to see the transition through to the end. They then set to familiarizing themselves with the codebase, refactoring along the way. By understanding, for example, a long complex function and rewriting it into smaller discrete functions, we were able to simplify the code, wrap our heads around its inner workings, and make it easier to work with for the next developer to join the project.
When first working with a codebase, it takes time to understand why a new change isn't sticking or why an error is occurring. Logs are a developer's best friend when it comes to debugging. Unfortunately, the logging available was sparse. The ETL had a running log, but it wasn't saved to a file for future reference or easy retrieval. Chris made the error log easy to reference and even added a Slack integration that sends a message to the team whenever an error occurs, helping people quickly respond to issues.
350.org has hundreds of chapters, spread across seven continents, with members speaking dozens of languages. Their mapping tool was built with this diversity in mind. It serves as a powerful storytelling device (goal number one), with a single map conveying the impressive reach of the movement, and not making assumptions as to where a visitor is or what they're looking for.
On the other hand, mobilizing is most effective when it comes from people we know, from communities we're part of. As such, the map can live in more localized contexts, showing just events and groups relevant to a particular scenario. For example, the 350 Colorado chapter can display a map zoomed into the Mountain West, while 350 France can show a map with just events in French.
These custom maps are created using embed parameters. A 350.org organizer would paste the map onto a page using an iframe, passing in parameters such as language, location, and data source as query parameters in the URL.
However, this approach was cumbersome, technically prohibitive, and error prone. We dropped the iframe approach and replaced it with a series of shortcodes, a more intuitive method that makes direct calls to the Climate Action Map API to render a map specific to an organizer's needs.
We added support for the following shortcodes:
Now organizers can create any number of maps with criteria meeting their campaign or community's specific needs.
With so many different events happening at any given time, the map risked overwhelming visitors looking to get involved. 350.org's designer Matthew Hinders-Anderson came up with the solution of applying different map pin styles to events depending on when they were happening. Past events have a subdued teal, while current and future events have a strong teal. To emphasize the storytelling purpose of the map (goal number one), current events throb.
To accomplish this, we needed to calculate an event's date and time relative to the current time. Unfortunately, many of the events had no timezone associated with them. However, they did all have some form of location available. Chris found a handy Python tool called timezonefinder that calculates an event's timezone based on latitude and longitude.
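A rough sketch of that approach (the coordinates, dates, and variable names are illustrative, not taken from the CJAM codebase, and Python 3.9+ is assumed for zoneinfo):

from datetime import datetime
from zoneinfo import ZoneInfo

from timezonefinder import TimezoneFinder

tf = TimezoneFinder()
# Resolve the IANA timezone name from an event's coordinates
# (example values for Boston).
tz_name = tf.timezone_at(lat=42.3601, lng=-71.0589)  # 'America/New_York'

# Compare the event's start to "now" in the event's own timezone.
event_start = datetime(2019, 9, 20, 12, 0, tzinfo=ZoneInfo(tz_name))
is_past = event_start < datetime.now(ZoneInfo(tz_name))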
With the timezone in hand, Mauricio could then apply the different colors (and flashing) based on the event's time relative to now.
With so many events organized, we wanted potential strikers to be able to find an event to attend quickly. Map embedders found, though, that sometimes searches would result in an empty map despite events being nearby. This is one of the many challenges of designing interactive maps. One example was a visitor living in a nearby suburb of Boston: a search for Allston would turn up nothing, despite there being multiple events within a 5-mile radius. We fine-tuned the zoom-in behavior to better show nearby events.
There were still edge cases, though. We addressed these by showing a "Zoom out" button if a visitor came up empty. Clicking that zooms the user out to the nearest result.
The mobilization plan was to push activists and organizers to plan events from June through August, then rally as many people as possible to RSVP to the newly created events from August up until the big days: September 20th and 27th. We rolled out the embed code functionality in August, and organizers put it to good use, embedding locally and regionally specific maps on local 350 group pages and on climate strike websites they had built.
The map was so popular that other organizations asked if they could embed it on their own sites, increasing the mobilization points and audiences reached. That we were able to do this speaks to the importance of defending the open web and the free and open source software that allows for the decentralized sharing and use of tools.
On the first day of the strikes the pin styles came to life, lighting up the many walkouts, rallies, and protests happening that day. It was a go-to graphic for journalists and supporters on social media to share when reporting on the unprecedented participation.
Ultimately, the numbers we saw were a testament to the long, hard work organizers constantly engage in and to the urgency of the moment we are in. However, with tools like the Climate Justice Action Map, built by technology activists alongside the organizers using them, we deepen and widen the mobilizing that is possible. And in these times of massive wealth inequality, deep political corruption, and a closing window of time for the bold action we need, disrupting the status quo is more important than ever.
Special thanks to the 350.org product team members Kimani Ndegwa, Matthew Hinders-Anderson, Nadia Gorchakova, and suzi grishpul for their vision, management of the project and design and development leadership.
I have watched in sadness and sometimes anger as large non-profit after large non-profit collectively poured enough money into Raiser's Edge and other Blackbaud licenses and consulting services to fund many feature enhancements for the main FLOSS alternative, CiviCRM: improvements which would then be free for everyone, forever.
I have never met anyone who actually likes Blackbaud products and services. However, many organizations felt they were the only safe option, in the sense of claiming to have everything an enterprise needs.
Now, Blackbaud has failed to secure its servers sufficiently, and large amounts of its clients' donor data, including personally identifying information, were obtained in a ransomware attack. This was back in May. Blackbaud ultimately paid the ransomer to (allegedly) destroy the data obtained, and only in late July did it finally tell its customers what happened.
As the American Civil Liberties Union wrote to all its supporters, current and past (including myself), this is a rotten situation:
In all candor, we are frustrated with the lack of information we've received from Blackbaud about this incident thus far. The ACLU is doing everything in our power to ascertain the full nature of the breach, and we are actively investigating the nature of the data that was involved, details of the incident, and Blackbaud's remediation plans.
We are also exploring all options to ensure this does not happen again, including revisiting our relationship with Blackbaud.
Fortunately, none of Agaric's clients are affected. But we hope everyone using or considering using Blackbaud and other proprietary services for their most important data will look at free/libre open source solutions. Code you (or your technology partner) can see and contribute to means you truly can do anything. And if you put aside the money that would be gouged out of your organization by the eTapestry, Kintera, and Convio-swallowing monopolist Blackbaud, you probably can afford to.
At Agaric, we have been working with CiviCRM more and more recently (building on experience dating back fifteen years!), and we know our friends at Palante Technology Cooperative and myDropWizard are well-versed in CiviCRM, as are many others. Please consider this when weighing your options for maintaining a strong, ethical relationship with your supporters, and let us know if you have any thoughts or questions!
MASS Design Group is an innovative non-profit architecture firm that collaborates with the communities and individuals they serve. Their belief that “Architecture is never neutral. It either heals or hurts.” is a powerful parallel to our own assertion that digital architecture shapes how we communicate and congregate online.
Everything we do at Agaric uses software that is free as in freedom - free to install, use and repurpose for your own needs.
Our work with Portside has already translated into improving and even creating the following Drupal modules:
* Give - on-site donation forms and reporting
* Social Post Twitter - Re-post content from your website to your Twitter account
* Social Share Facebook - Re-post content from your website to your Facebook page
* Minimal HTML - A text format handy for short text fields
So, when you donate to Portside's fundraising campaign, you are both supporting independent journalism and the open-source software that benefits other independent outlets and websites.
It could be in articles of organization, bylaws, a simple contractual agreement between members, or even a handshake. A cooperative or collective is defined by its members.
Talk to people in your personal network about your goal.
Let former co-workers know you are forming or seeking to work with a cooperative.
There are meetups (meetup.com) or you could start one in your area.
Reach out to mailing lists you are on and ask if people are interested in working collectively.
Food:
Worker coops: usworker.coop (member directory)
Open Directory search for all types of coops, and Twitter: https://twitter.com/hashtag/cooperatives
Encourage pooled funds from successful cooperatives to help bootstrap new proposed cooperatives
Get involved in conversations, and create conversations. Let others know you are interested in cooperative work experiences and you are seeking information and connections.
There are four principles, or freedoms, that define free software, the building blocks of this Digital Commons resource we all rely on: the freedom to run the program as you wish (freedom 0), to study and change how it works (freedom 1), to redistribute copies (freedom 2), and to distribute copies of your modified versions (freedom 3).
When software is built this way, it protects us as users from malicious backdoors compromising our security, proprietary algorithms obscuring what we see and don't see, and predatory vendors locking us into expensive contracts. It also democratizes our technology - making it free for anyone to install and make use of. Examples of software in the digital commons include the Firefox browser, Linux operating system, and MediaWiki (which powers Wikipedia).
Using free software doesn't automatically mean that one is fully participating in the Digital Commons. For example, we use Drupal, Django, and WordPress to build websites. It is common for sites to then add on custom code, or configure their site in unique ways - source code that is hidden from the general public.
The diagram above shows an example website that has most of its software within the Digital Commons. However, there is some custom code (code written by someone that hasn't been released back into the commons) and some proprietary software integrating with the site.
Taking a closer look at who is maintaining and contributing to the various projects, we see that the software in the Digital Commons has many more people behind it. When something is free and open, then communities of literally thousands of people can help maintain the software. When something is custom code, only the original creator and their coworkers can maintain it (poor David). And when it comes to proprietary software, we're handing complete control over to the company who owns it.
Freedom 1, the freedom to study how a program works, ensures that site visitors and users know exactly what a website or app is doing. Even if a website starts out using free software, it's possible to extend it to do all sorts of malicious things. Sharing one's code with the world is a way to communicate transparency. Note that this is separate from one's data, such as user passwords and personal information; that stays under lock and key.
Keeping all code written out in the open also adds a layer of auditability regarding quality. We follow software development best practices and, to back that up, we share that code with the world. Besides, best practices aren't always cut and dried, and there are often opportunities to make good code great.
When possible, we write and use what is called "contributed code." This is code that has been written in a generalizable way so that others can also benefit from it. Often, a tool already exists to solve a problem. Other times a tool might get a project 90% of the way there. Some might decide to meet their unique case by building something from scratch. We, however, prefer to build upon existing solutions.
For example, when we built the ability for Portside to cross-post their articles to Twitter, we did that by improving the Social Post Twitter module - a tool anyone running a Drupal site can use as well. We could have written that as custom code, only for Portside to use. However, we took the time to contribute this back to the community.
Contributing code to the Digital Commons is not just a kind thing to do; it helps strengthen the software we rely on. As mentioned above, now the Social Post Twitter module is available for others to audit and make improvements to. While custom code is maintained by whoever initially wrote it, contributing code back to the commons opens that software up to maintenance and improvement from a wider community. The more sites using that software, the more attention and care it receives.
It can take more time to contribute code back to the commons than creating a one-off solution. For nonprofits and other organizations on small budgets, it may seem impractical or foolish to take the extra time to contribute code. However, we've found that the stability and future improvements gained by keeping the code in the commons is well worth it. It also ensures your software is maintainable moving forward. We've seen nonprofits get burned time and again by developers who choose to write custom code that is then difficult for others (or even the original authors!) to come in and maintain. By keeping your software in the commons, you protect your projects with the strength of the free software community.
Funding a solution that will then be shared and available for others to use for free can again sound foolishly selfless. Why should we let other organizations use for free what we had to commit significant resources to? It can feel odd to sponsor work others get for free. However, it's important to keep in mind that no software is built from scratch. We all stand on the shoulders of those who came before us. The functionality free software already provides was paid for by someone else, either with money or volunteer time. When our clients are generous enough to agree to contribute their solutions back to the commons, we make sure to recognize them for it. This lets others know about the stewardship and leadership they're exercising in the Digital Commons.
Now that you know more about the Digital Commons you are part of, we hope you join us in taking care of it and benefiting from it. If you're a freelancer or agency, look for opportunities to change your workflow to deepen your participation in the commons. If you're an organization, audit your existing technology stack. Are there proprietary tools you could replace with free software? Do you have custom solutions that would be better off contributed to the commons? Is your website or app's source code posted for people to audit and learn from? The next time you budget for a new improvement, discuss how that could be contributed back. Being part of the Digital Commons makes the software we all use stronger.
The Migrate API is a very flexible and powerful system that allows you to collect data from different locations and store it in Drupal. It is, in fact, a full-blown extract, transform, and load (ETL) framework. For instance, it could produce CSV files. Its primary use, however, is to create Drupal content entities: nodes, users, files, comments, etc. The API is thoroughly documented, and its maintainers are very active in the #migration Slack channel for those needing assistance. The use cases for the Migrate API are numerous and vary greatly. Today we are starting a blog post series that will cover different migrate concepts so that you can apply them to your particular project.
Extract, transform, and load (ETL) is a procedure where data is collected from multiple sources, processed according to business needs, and the result stored for later use. This paradigm is not specific to Drupal. Books and frameworks abound on the topic. Let's try to understand the general idea by following a real-life analogy: baking bread. To make some bread you need to obtain various ingredients: wheat flour, salt, yeast, etc. (extracting). Then, you need to combine them in a process that involves mixing and baking (transforming). Finally, when the bread is ready, you put it on shelves for display in the bakery (loading). In Drupal, each step is performed by a Migrate plugin:

* The extract step is provided by source plugins.
* The transform step is provided by process plugins.
* The load step is provided by destination plugins.
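To make this concrete, here is a minimal migration definition showing all three plugin types together. It is a sketch with illustrative names (bread_example, the sample row, and an assumed page content type), not one of the files used later in this series:

id: bread_example
label: 'Minimal example of the ETL steps'
source:
  plugin: embedded_data
  data_rows:
    - unique_id: 1
      title: 'Freshly baked bread'
  ids:
    unique_id:
      type: integer
process:
  title: title
destination:
  plugin: 'entity:node'
  default_bundle: page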
As is the case with other systems, Drupal core offers some base functionality which can be extended by contributed modules or custom code. Out of the box, Drupal can connect to SQL databases, including previous versions of Drupal. There are contributed modules to read from CSV files, XML documents, JSON and SOAP feeds, WordPress sites, LibreOffice Calc and Microsoft Office Excel files, Google Sheets, and much more.
The list of core process plugins is impressive. You can concatenate strings, explode or implode arrays, format dates, encode URLs, look up already migrated data, among other transform operations. Migrate Plus offers more process plugins for DOM manipulation, string replacement, transliteration, etc.
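For example, combining two source columns into a single destination value with the core concat plugin would look roughly like this (the field names are illustrative):

process:
  title:
    plugin: concat
    source:
      - first_name
      - last_name
    delimiter: ' '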
Drupal core provides destination plugins for content and configuration entities. Most of the time, targets are content entities like nodes, users, taxonomy terms, comments, files, etc. It is also possible to import configuration entities like field and content type definitions. This is often used when upgrading sites from Drupal 6 or 7 to Drupal 8. Via a combination of source, process, and destination plugins, it is possible to write Commerce Product Variations, Paragraphs, and more.
Technical note: The Migrate API defines another plugin type: id_map. These plugins are used to map source IDs to destination IDs, which allows the system to keep track of records that have been imported and roll them back if needed.
Performing a Drupal migration is a two-step process: writing the migration definitions and executing them. Migration definitions are written in YAML format. These files contain information about how to fetch data from the source, how to process the data, and how to store it in the destination. It is important to note that each migration file can only specify one source and one destination. That is, you cannot read from a CSV file and a JSON feed using the same migration definition file. Similarly, you cannot write to nodes and users from the same file. However, you can use as many process plugins as needed to convert your data from the format defined in the source to the format expected in the destination.
A typical migration project consists of several migration definition files. Although not required, it is recommended to write one migration file per entity bundle. If you are migrating nodes, that means writing one migration file per content type. The reason is that different content types will have different field configurations. It is easier to write and manage migrations when the destination is homogeneous. In this case, a single content type will have the same fields for all the elements to process in a particular migration. Once all the migration definitions have been written, you need to execute the migrations. The most common way to do this is using the Migrate Tools module, which provides Drush commands and a user interface (UI) to run migrations. Note that the UI for running migrations only detects those that have been defined as configuration entities using the Migrate Plus module. This is a topic we will cover in the future. For now, we are going to stick to Drupal core's mechanisms of defining migrations. Contributed modules like Migrate Scheduler, Migrate Manifest, and Migrate Run offer alternatives for executing migrations.
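With Migrate Tools installed, running migrations from the command line looks roughly like this (the migration ID my_custom_migration is a placeholder, not one from this series):

# See all migrations and how many records each has processed.
drush migrate:status

# Execute one migration.
drush migrate:import my_custom_migration

# Undo it, deleting the entities it created.
drush migrate:rollback my_custom_migration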
Next: Writing your first Drupal migration
This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors: Drupalize.me by Osio Labs has online tutorials about migrations, among other topics, and Agaric provides migration trainings, among other services. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.
Translation provided by Colette Morya
In Drupal 7 it was useful to do things like this:
function mymodule_content() {
  $links = array();
  $links[] = l('Google', 'http://www.google.com');
  $links[] = l('Yahoo', 'http://www.yahoo.com');
  return t('Links: !types', array('!types' => implode(', ', $links)));
}
In this case, we are using the exclamation mark to pass the $links into our string without escaping. Unfortunately, Drupal 8 doesn't have this option in FormattableMarkup::placeholderFormat(). The good news is that even without it, there is a way to accomplish the same thing.
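One way to accomplish it, sketched below, is to build the links with the Link class and wrap the imploded string in Markup::create() so that the @ placeholder does not escape the markup. Treat this as one possible equivalent rather than the only approach (an inline Twig template is another option):

use Drupal\Component\Render\FormattableMarkup;
use Drupal\Core\Link;
use Drupal\Core\Render\Markup;
use Drupal\Core\Url;

function mymodule_content() {
  $links = [];
  $links[] = Link::fromTextAndUrl('Google', Url::fromUri('http://www.google.com'))->toString();
  $links[] = Link::fromTextAndUrl('Yahoo', Url::fromUri('http://www.yahoo.com'))->toString();
  // Markup::create() marks the already-rendered links as safe so the
  // @ placeholder does not escape them.
  return new FormattableMarkup('Links: @types', [
    '@types' => Markup::create(implode(', ', $links)),
  ]);
}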
Today we will learn how to migrate addresses into Drupal. We are going to use the field provided by the Address module, which depends on the third-party library commerceguys/addressing. When migrating addresses you need to be careful with the data that Drupal expects. The address components can change per country, and the way to store those components also varies per country. These and other important considerations will be explained. Let's get started.
You can get the full code example at https://github.com/dinarcon/ud_migrations. The module to enable is UD address, whose machine name is ud_migrations_address. The migration to execute is udm_address. Notice that this migration writes to a content type called UD Address and one field: field_ud_address. This content type and field will be created when the module is installed. They will also be removed when the module is uninstalled. The demo module itself depends on the following modules: address and migrate.
Note: Configuration placed in a module's config/install directory will be copied to Drupal's active configuration. And if those files have a dependencies/enforced/module key, the configuration will be removed when the listed modules are uninstalled. That is how the content type and fields are automatically created and deleted.
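For reference, this is roughly what that key looks like inside one of the files in config/install, such as the content type definition (a sketch based on the module named above):

dependencies:
  enforced:
    module:
      - ud_migrations_address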
The recommended way to install the Address module is using Composer: composer require drupal/address. This will grab the Drupal module and the commerceguys/addressing library that it depends on. If your Drupal site is not Composer-based, an alternative is to use the Ludwig module. Read this article if you want to learn more about this option. In the example, it is assumed that the module and its dependency were obtained via Composer. Also, keep an eye on the Composer Support in Core Initiative as it makes progress.
The example will migrate three addresses from the following countries: Nicaragua, Germany, and the United States of America (USA). This makes it possible to show how different countries expect different address data. As usual, for any migration you need to understand the source. The following code snippet shows how the source and destination sections are configured:
source:
  plugin: embedded_data
  data_rows:
    - unique_id: 1
      first_name: 'Michele'
      last_name: 'Metts'
      company: 'Agaric LLC'
      city: 'Boston'
      state: 'MA'
      zip: '02111'
      country: 'US'
    - unique_id: 2
      first_name: 'Stefan'
      last_name: 'Freudenberg'
      company: 'Agaric GmbH'
      city: 'Hamburg'
      state: ''
      zip: '21073'
      country: 'DE'
    - unique_id: 3
      first_name: 'Benjamin'
      last_name: 'Melançon'
      company: 'Agaric SA'
      city: 'Managua'
      state: 'Managua'
      zip: ''
      country: 'NI'
  ids:
    unique_id:
      type: integer
destination:
  plugin: 'entity:node'
  default_bundle: ud_address
Note that not every address component is set for all addresses. For example, the Nicaraguan address does not contain a ZIP code, and the German address does not contain a state. Also, the Nicaraguan state is fully spelled out: Managua. By contrast, the USA state is a two-letter abbreviation: MA for Massachusetts. One more thing that might not be apparent is that the USA ZIP code belongs to the state of Massachusetts. All of this is important because the module validates addresses. The destination is the custom ud_address content type created by the module.
The Address field has 13 subfields available. They can be found in the schema() method of the AddressItem class. Fields are not required to have a one-to-one mapping between their schema and the form widgets used for entering content. This is particularly true for addresses because input elements, labels, and validations change dynamically based on the selected country. The following is a reference list of all subfields for addresses:
* langcode for language code.
* country_code for country.
* administrative_area for administrative area (e.g., state or province).
* locality for locality (e.g., city).
* dependent_locality for dependent locality (e.g., neighbourhood).
* postal_code for postal or ZIP code.
* sorting_code for sorting code.
* address_line1 for address line 1.
* address_line2 for address line 2.
* organization for company.
* given_name for first name.
* additional_name for middle name.
* family_name for last name.

Properly describing an address is not trivial. For example, there are discussions to add a third address line component. Check this issue if you need this functionality or would like to participate in the discussion.
In the example, only 9 out of the 13 subfields will be mapped. The following code snippet shows how to do the processing of the address field:
process:
  field_ud_address/given_name: first_name
  field_ud_address/family_name: last_name
  field_ud_address/organization: company
  field_ud_address/address_line1:
    plugin: default_value
    default_value: 'It is a secret ;)'
  field_ud_address/address_line2:
    plugin: default_value
    default_value: 'Do not tell anyone :)'
  field_ud_address/locality: city
  field_ud_address/administrative_area: state
  field_ud_address/postal_code: zip
  field_ud_address/country_code: country
The mapping is relatively simple. You specify a value for each subfield. The tricky part is knowing the name of the subfield and the value to store in it. The format for an address component can change among countries. The easiest way to see what components are expected for each country is to create a node for a content type that has an address field. With this example, you can go to /node/add/ud_address and try it yourself. For simplicity's sake, let's consider only the three countries from the example data: Nicaragua, Germany, and the USA.
Pay very close attention. The available subfields will depend on the country. Also, the form labels change per country or language settings. They do not necessarily match the subfield names. Moreover, the values that you see on the screen might not match what is stored in the database. For example, a Nicaraguan address will store the full department name, like Managua. On the other hand, a USA address will only store a two-letter code for the state, like MA for Massachusetts.
Something else that is not apparent even from the user interface is data validation. For example, let's say that you have a USA address and select Massachusetts as the state. Entering the ZIP code 55111 will produce the following error: Zip code field is not in the right format. At first glance, the format is correct: a five-digit code. The real problem is that the Address module is validating whether that ZIP code is valid for the selected state. It is not valid for Massachusetts. 55111 is a ZIP code for the state of Minnesota, which makes the validation fail. Unfortunately, the error message does not indicate that. Nine-digit ZIP codes are accepted as long as they belong to the selected state.
Note: If you are upgrading from Drupal 7, the D8 Address module offers a process plugin to upgrade from the D7 Address Field module.
Values for the same subfield can vary per country. How can you find out which value to use? There are a few ways, but they all require varying levels of technical knowledge or access to resources:

* Inspect the country-specific form widget in the browser. In the node creation form, look at the value attribute for the option that you want to select. This will contain the two-letter code for countries, the two-letter abbreviations for USA states, and the fully spelled string for Nicaraguan departments.
* Create example nodes via the user interface, then use the devel tab of the node to inspect how the values are stored. It is not recommended to have the devel module in a production site. In fact, do not deploy the code even if the module is not enabled. This approach should only be used in a local development environment. Make sure no module or configuration is committed to the repo nor deployed.
* Query the database table directly: node__field_[field_machine_name], if migrating nodes. First create some example nodes via the user interface and then query the table. You will see how Drupal stores the values in the database.

If you know a better way, please share it in the comments.
With version 8 came many changes in the way Drupal is developed. Now there is an intentional effort to integrate with the greater PHP ecosystem. This involves using existing libraries and frameworks, like Symfony, but also making code written for Drupal available as external libraries that can be used by other projects. commerceguys/addressing is one example of a library that was made available this way, and the Address module makes use of it.
Explaining how the library works or where it fetches its data is beyond the scope of this article. Refer to the library documentation for more details on the topic. We are only going to point out some things that are relevant for the migration. For example, the ZIP code validation happens in the validatePostalCode() method of the AddressFormatConstraintValidator class. There is no need to know this for a migration project, but the key thing to remember is that the migration can be affected by third-party libraries outside of Drupal core or contributed modules. Another example is the value for the state subfield: the Address module expects a subdivision as listed in one of the files in the resources/subdivision directory.
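If you want to see exactly which values the library expects, one option is to ask its subdivision repository directly. A minimal sketch, assuming the commerceguys/addressing 1.x API:

use CommerceGuys\Addressing\Subdivision\SubdivisionRepository;

$repository = new SubdivisionRepository();
// The array keys are the values the administrative_area subfield expects:
// two-letter abbreviations for USA states...
$usa_states = $repository->getList(['US']);
// ...and fully spelled names for Nicaraguan departments.
$nicaraguan_departments = $repository->getList(['NI']);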
Does the validation really affect the migration? We have already mentioned that the Migrate API bypasses Form API validations, and that is true for address fields as well. You can migrate a USA address with the state Florida and ZIP code 55111. Both are invalid, because you need to use the two-letter state code FL and a valid ZIP code within the state. Notwithstanding, the migration will not fail in this case. In fact, if you visit the migrated node you will see that Drupal happily shows the address with the data that you entered. The problem arrives when you need to use the address. If you try to edit the node, you will see that the state is not preselected. And if you try to save the node after selecting Florida, you will get the validation error for the ZIP code.
These validation issues can be hard to track because no error will be thrown by the migration. The recommendation is to migrate a sample combination of countries and address components, then manually check whether editing a node shows the migrated data for all the subfields. Also check that the address passes Form API validations upon saving. This manual testing can save you a lot of time and money down the road. After all, if you have an ecommerce site, you do not want to be shipping your products to wrong or invalid addresses. ;-)
Technical note: The commerceguys/addressing library actually follows ISO standards. In particular, it uses ISO 3166 for country and state codes. It also uses CLDR and Google's address data. The dataset is stored as part of the library's code in JSON format.
The Address module offers two more field types: Country and Zone. Both have only one subfield, value, which is selected by default. For country, you store the two-letter country code. For zone, you store a serialized version of a Zone object.
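Mapping a country field would look roughly like the following, assuming a hypothetical field named field_ud_country:

process:
  field_ud_country/value: country

Zone fields are trickier to migrate because the stored value is a serialized Zone object, so producing it would likely require a custom process plugin.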
What did you learn in today's blog post? Have you migrated addresses before? Did you know the full list of subcomponents available? Did you know that data expectations change per country? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.
Next: Introduction to paragraphs migrations in Drupal
This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors: Drupalize.me by Osio Labs has online tutorials about migrations, among other topics, and Agaric provides migration trainings, among other services. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.