Notes:
Slides - https://github.com/fiqus/FIT-talk-en
## FACTTIC - Federación Argentina de Cooperativas de Trabajo de Tecnología, Innovación y Conocimiento (Argentine Federation of Technology, Innovation, and Knowledge Worker Cooperatives)
* Members-only mailing list
* Mattermost (open-source chat)
* Monthly virtual board meeting (any member can attend)
* Annual face-to-face meetings
## FIT
* A project within FACTTIC where cooperatives share the status of the projects they are working on
* It has evolved into a space where cooperatives share projects
* To join FIT, you have to be a FACTTIC member
* Monthly virtual meetings
* Mattermost channel for ongoing discussion
* Cooperatives have different skills/services, but where there is overlap, they try not to compete with each other and instead assess each cooperative's needs.
### Scenario 1: The project demands more workers than the cooperative has
* When help is needed, the project is shared in FIT
* Cooperatives can apply to join the project
* Candidates are evaluated and one is chosen
* The client is informed and must agree
* Project coordination is led by the initial cooperative
* The business agreement is handled only by the initial cooperative
### Scenario 2: The client needs the work done, but the cooperative decides not to take it
* This could happen because the initial cooperative does not have the capacity or declines for a strategic reason
* The project is shared with FIT
* If no cooperative is interested, the client is told that no one is available.
* If one cooperative is interested, that cooperative's contact is shared with the client.
* If more than one cooperative is interested, we ask: does this project require more than one worker?
* If it only needs one worker, the cooperative that needs the work most gets it.
* If it requires more than one worker, the cooperatives coordinate with each other to complete the work.
## Case Studies
### Betterez
* Canadian client
* Reservation and ticketing management platform
* Technologies: MongoDB, NodeJS, VueJS, and Elixir
* Needed more work than Fiqus could provide on its own
* 30 developers from 7 different cooperatives
* Fiqus handles the financial matters, such as the different rates for different services
### Receptivi
* Canadian client
* Website that shows psychological insights about staff in real time
* There was more work, but Fiqus declined to take it on
* The work was shared in FIT
* 3 developers from 2 cooperatives
### Mall Plaza
* Chilean client
* Mobile app that shows the shopping mall's services
* Technologies: React Native, PostgreSQL, Flask
### Onapsis
* Argentine client
* Web system that shows vulnerability alerts on servers
* 2 cooperatives
## FIT International
* We want to replicate this model internationally.
1. Share this model with others in order to improve it and spread awareness
* Presenting at the Show and Tell
2. Build relationships of trust
3. Get to know each other in person, spend time together
* Traveling to the UK, meeting with the federation, COTECH
* Sharing experiences after the trip
## Questions and answers:
Q: Have you ever had resistance from a client when handing work over to another cooperative?
A: There are times when clients do not understand cooperatives and the cooperation between them. We explain the benefit and share case studies. If there is a deadline that must be met, it is faster to bring in a team with prior experience working with the original company than to try to find a completely different company.
Q: How much do you share about the multiple cooperatives working on a project?
A: If it is only a few hours, it is not worth talking about. However, most of the time it is important to share that information and use it as an educational opportunity to demonstrate the strength of cooperatives working together.
* Once clients see the result of the cooperation, they realize it is a good way to approach the work.
* The simplicity of the process is beautiful.
Q: How do you share the costs of business development?
A: The cooperative that shares the project may lower its rates during the ramp-up process.
* This is an aspect that could be improved.
* The most important thing is to be transparent and communicate a lot.
* Keep the spirit of generosity flowing.
* When the cooperation is successful, trust is built with the client.
* Use a tool to analyze project budgets and progress and to forecast availability.
* Cooperation also ensures quality, as trusted workers join the project.
Find It makes it easier for a small team in government to make sure that there are resources available for a variety of residents' needs.
We can look at the recent popularity of some widely used platforms like Zoom and ask ourselves why we still use them when we know a lot of terrible things about them. Agaric prefers to use free/libre video chat software called BigBlueButton, the first reason being the licensing, but there are many others.
Zoom has had some major technology failures, which the corporation is not liable to disclose. At one point, a vulnerability was discovered in the desktop Zoom client for MacOS that allowed hackers to start your webcam remotely and launch you into a meeting without your permission. The company posted a note saying that they fixed the issue. Unfortunately, the Zoom source code is proprietary and we are not even allowed to look at it. There is no way for the community to see how the code works or to verify that the fix was comprehensive.
The Zoom Corporation stated early on that the software was encrypted end-to-end (E2EE) from your device to the recipient's device. This was untrue at the time, but the company states that it has been corrected for users on their client app. While it is no longer true that E2EE is unsupported, it does require that you use the proprietary Zoom client for E2EE to work. Without E2EE, any data that is retrieved on its way from your computer to a server can be accessed! The only real security is knowing the operators of your server. This is why Agaric uses trusted sources like MayFirst.org for most of our projects, and we have a relationship with our BigBlueButton host. The Intercept also revealed that calls from Zoom users who dial in by phone are NOT encrypted at all.
BigBlueButton does not have a client app and works in your browser, so there is no E2EE. The idea behind E2EE is that you do not have to trust the server operator, because the model implies that every client holds keys protecting the transferred data. However, you MUST still use a proprietary client in order to get the benefits of Zoom's E2EE support, so once again you MUST trust Zoom, as you have no permission to examine the app and determine that the keys are not being shared with Zoom.
Of course, there is always the fact that hackers work day and night to break E2EE, and a corporation is not obligated to tell you, the customer, every time there has been a security breach unless personal information is exposed; this information is usually buried in the terms of service they post, sometimes with a note saying the terms are subject to change and updates. There are now mandatory timely disclosure requirements in all states: https://www.ncsl.org/research/telecommunications-and-information-technology/security-breach-notification-laws.aspx ...Can Zoom really be trusted? As with some laws, the fines applied are low and affordable, subject to the interpretation of the courts and the knowledge your lawyer is privy to, meaning most corporations keep a battery of lawyers to interpret the law and drag a case out until you are... broke.
In the case of BigBlueButton encryption, E2EE would only make sense if there were separate clients using an API to connect to the BBB server, so a user does not have to trust the BBB server operator. If the user trusts the server operator, then there is no need for E2EE. Lesson learned: it is always best practice to know and trust your server hosts, as they are the ones that hold the keys to your kingdom.
Some technology analysts consider Zoom software to be malware. Within companies that use Zoom, employers are even able to monitor whether or not you are focusing on the computer screen during meetings, which seems excessively intrusive. Speaking of intrusive, the Zoom Corporation also shares your data with Facebook, even if you do not have a Facebook account - that could be a whole blog post in itself, but just being aware of some of the vulnerabilities is a good thing to pass on. Some of the bad stuff remains even if you uninstall the Zoom app from your device! Even though a class action suit was filed over privacy issues, the company's stock still continued to rise.
Those are some of the many reasons why we do not support Zoom. But there are also many reasons why we prefer BBB over Zoom. BBB has many great features that Zoom lacks:
1. Easily see who is speaking when their name appears above the presentation.
2. Chat messages will remain if you lose your connection or reload and rejoin the room.
3. Video is HD quality and you can easily focus on a person's webcam image.
4. Collaborative document writing on a shared Etherpad.
5. Easily share the presenter/admin role with others in the room.
6. Write closed captions in many languages, as well as change the language of the interface.
7. An interactive whiteboard for collaborative art with friends!
One huge advantage of free software, like BBB, is that you can usually find their issue queue where you can engage with the actual developers to report bugs and request feature enhancements. Here is a link to the BigBlueButton issue queue.
So, why do people keep using a platform like Zoom, even though there are many features in BigBlueButton that are much better?
There is very little publicity for free software, and not many people know it exists or that there are alternative solutions. You can find some great software suggestions, and switch to them, at a site called switching.software. The marketing budget for Zoom is large and leads you to believe it has everything you will need. Sadly, their budget grows larger every day with the money people pay for subscriptions to the platform. As a result, many people go with it because it is already used by their friends and colleagues, even though there are reports of irresponsible behavior by the Zoom Corporation. This is why the New York school system does not use Zoom, and many organizations are following suit. The company gives people a false sense of security because it is widely used and very popular.
Of course, there are reasons to avoid other proprietary chat platforms too...
Agaric offers BigBlueButton for events and meetings. Check out our fun BBB website at CommunityBridge and test drive the video chat yourself!
If this discussion interests you, please share your thoughts with us in the comments.
Looking to learn more about problems with Zoom? There are a lot of articles about Zoom scandals.
Looking to learn more about protecting your privacy online? These links have some helpful information and videos for tech-savvy people and organic folks alike!
2021 could be the year we all begin to STOP supporting the Corporations that oppress us.
Special thanks to Keegan Rankin for edits!
Welcome to Drutopia! We hope you are enjoying the features we have built. Everything you are using is open-source and free for good people like yourself to use.
We invite you to give your input on the project and contribute where you can by becoming a member of Drutopia. For as little as $10 a year you can become a member who votes for our leadership team, suggests features for our roadmap and is part of a community building tools for the grassroots.
Learn more about membership at drutopia.org
Learn when we have new opportunities for learning (two to four announcements a year).
This month, the National Institute for Children's Health Quality is celebrating Agaric's support in making the most of the digital health revolution, part of their 20th anniversary campaign.
It got us thinking about how long we've been working in the space. Indeed, Agaric is proud to have been helping medical and scientific communities almost from our founding.
In 2008, we started building biomedical web communities enriched by semantic data. Working with researchers Sudeshna Das of Harvard University and Tim Clark of Massachusetts General Hospital, both affiliated with Harvard's Initiative in Innovative Computing, we were the primary software developers for the Science Collaboration Framework (SCF), a reusable platform for advanced, structured, online collaboration in biomedical research that leveraged reference ontologies for the biomedical domain. Drawing on academic groups actively publishing controlled vocabularies and making data available in the Resource Description Framework (RDF) language, we built on work done by Stéphane Corlosquet, a lead developer in adding RDF to the Drupal content management system. SCF supported structured, "Web 2.0"-style community discourse among researchers when that was a new thing, made heterogeneous data resources available to collaborating scientists, and captured the semantics of the relationships among those resources, giving structure to the discourse around them.
Read more about it in Building biomedical web communities using a semantically aware content management system in Briefings in Bioinformatics from Oxford Academic.
Agaric led the work of building the website for an online community of Parkinson's disease researchers and research investors, on the Science Collaboration Framework, for the Michael J. Fox Foundation for Parkinson's Research.
In 2012, we worked with Partners In Health to create a site for people on the front lines of combatting tuberculosis to share and discuss approaches.
In 2015, we began contributing to the Platform for Collaborative and Experimental Ethnography. PECE is "a Free and Open Source (Drupal-based) digital platform that supports multi-sited, cross-scale ethnographic and historical research. PECE is built as a Drupal distribution to be improved and extended like any other Drupal project." We primarily worked on the integration of PECE's bibliography capabilities with Zotero's online collaborative bibliography services.
Also in 2015, we took on the exciting work of rebuilding the Collaboratory—a platform designed specifically to help improvement teams collaborate, innovate, and make change—for the National Institute for Children's Health Quality. We're proud to be NICHQ's 2020 partners in making the most of the digital health revolution.
All in all, we're impressed by our twelve years of building sites for the scientific and medical communities, and looking forward to helping shape a healthy future.
As it was for much of the world, 2018 was a combination of extremes for Agaric and the free and open web. Happily, we expanded our team, launched new sites, and empowered our clients through libre software. Unhappily, many of us and our communities endured health issues, political instability, and the effects of climate change.
For the open web, we disappointedly saw the United States officially end Net Neutrality while we excitedly watched the European Union begin enforcing comprehensive privacy laws with its General Data Protection Regulation. We were disgusted by tech giants like Facebook and Palantir diverting and deflecting from the abuses they carry out, but we were also inspired by workers at companies like Amazon and Google forcing their bosses to do better.
In looking back, we celebrate the victories and learn from the challenges—with our eyes set on serving our clients better, expanding the open web, and building an economy based on solidarity rather than exploitation.
To that end, here are the highlights of our work from last year and our intentions for the new year.
On November 13th and 14th in New York City, several hundred people gathered to talk about the problems of an online economy reliant on monopoly, extraction, and surveillance—and discuss how to build a "cooperative Internet, built of platforms owned and governed by the people who rely on them."
My experience at the Platform Cooperativism summit was "Wow, everyone here really gets it and so many are doing awesome things"; and then "Hmm, there are still some really important differences to be worked out"; and then "We'll have to continue for months to figure out strategy for building fair platforms, and we also need to restructure the whole economy."
In the sense technologists use it a platform is, like a physical platform, a technology that holds a lot of people up. It convenes people and gives them a chance to do something they wouldn't otherwise be able to do. Platforms can often be natural monopolies due to capturing the benefits of network effects (one person with a telephone is pointless, having nearly everyone available by telephone is incredibly valuable). Amazon and eBay are both platforms for sellers and buyers, Uber and Lyft for drivers and riders, Mechanical Turk and TaskRabbit for piece-workers and buyers of their work.
A cooperative is a jointly owned and democratically-controlled enterprise formed by people voluntarily uniting to meet their common needs and aspirations. Agaric is a small worker-owned cooperative, Mondragon is a very large group of integrated worker cooperatives, consumer cooperatives are businesses owned by their customers, credit unions are financial institutions owned by their members (with a one person, one vote governance), and producer cooperatives like CROPP Cooperative are formed by member businesses (which are not necessarily cooperatives themselves).
A platform cooperative, then, is a platform owned and controlled by the people directly affected by it. A company must be accountable, and as Omar Freilla put it, accountable means those impacted make the decisions.
This summit was a follow-up to the Digital Labor summit held one year before which detailed myriad ways centralized online platforms extract value from dispersed workers who have few options or bargaining power. Control of online platforms by the representatives of capital has or will have negative effects on workers, similar to exploitation in global manufacturing (think electronic devices and clothing), and negative effects on customers (think the massive money grab by oligopolies of fossil fuel and telecommunications corporations).
Agaric's Michele Metts told the Digital Labor summit organizers every chance she got that cooperatives and Free Software were the answer to exploited labor in the Internet economy, but something even more powerful than Micky's advocacy must have been at work: nearly every participant at Platform Cooperativism spoke of the need for workers to own the platforms that control their work, and people presenting on technology took for granted that source code and algorithms have to be open for democratic control to be meaningful. As Micky said on her panel, "You cannot build a platform for freedom on someone else's slavery."
The opening presentations made the case that platforms will exploit us unless we take control, and we moved on to discussing strategies for building platform businesses that are cooperatives of the people using the platforms. We also celebrated those already starting, like Loconomics, Fairmondo (in Germany), and Member's Media.
The biggest unsolved, but acknowledged, problem is getting the resources to build platforms that can compete with venture capital-funded platforms. Dmytri Kleiner made the claim that profit requires centralization, and, moreover, that centralization requires organizing along the lines of a profit-taking venture. How can people get the resources to build without both having to give up control and having to exploit people using the platform? Robin Chase reminded us that it costs millions of dollars, at least, to build a viable platform. Her solution is to continue to seek venture capital and work for some environmental or community goals while compromising on control.
A more popular possible solution is to replace centralized systems with decentralized ones, even to the point of replacing specific software with protocols, so the cost of building and operating platforms can be more widely shared, along with the benefits. However, as Astra Taylor summed up the widely felt point, decentralization does not always mean distributed power. Therefore control of technology decisions, and so democratic control of platforms, is more important than technology itself.
The potential positive role for government regulation was often mentioned, as Sarah Ann Lewis summarized the sentiment in a tweet: "Platforms are not special snowflakes that must be exempt from regulation. If you can only succeed by exploitation you deserve to melt." Indeed, the centralized and surveillance nature of most platforms would make it much easier to ensure non-discrimination and fair wages.
More excitement came from the mention that local government has long played a role and can play a stronger part in democratic ownership of physical spaces. Several speakers urged people to get involved in local government, where harmful policies may be more the result of a lack of knowledge than of embedded corruption. Government can also get involved in mandating an open API for ride hailing services, which would remove the monopoly power from centralizing companies.
Hundreds of possible solutions faced lively questioning and debate, yet in all of this the titular solution, cooperative ownership, did not get the scrutiny it merits. Jessica Gordon Nembhard's Collective Courage has made me see that the connections and overlaps between worker cooperatives and other types of cooperatives are much more significant than I'd thought, but there are still differences. These differences, and the need to decide who exactly is democratically controlling a platform, were often not made clear by presenters, including some who are building platform cooperatives.
If Stocksy, for example, is owned by its photographers, can the workers who build the platform technology (rather than use it) play a part in democratic control? Co-founder and CEO Brianna Wettlaufer refers to it as a multi-stakeholder cooperative and it has been around since 2012 so they've surely worked it out, but this question is at the heart of how platform cooperatives must operate and it was hardly addressed at all.
The answer can be simple. The Black Star Coop brewery and restaurant in Austin, Texas, is owned by its customer-members while the workers manage it. The workers are internally a democracy, but there's no question they work for a business which is managed democratically by the customers. This makes even more sense for a quasi-monopoly platform: It's more important for, say, millions of people relying on a platform for livelihood or transportation or communication to own it than for the relatively small number of people who built it to own it.
This brings up another question that went largely unasked at the conference: does ownership mean anything when it's spread out among thousands or millions of people? Federated structures can mitigate this, but in general whoever controls communication among members effectively controls decisions. It may be possible to have horizontal mass communication by way of democratic moderation. At a small workshop I held at the conference, participants discussed ways collective control can be made real as democratic platforms scale—but that's a topic for another discussion.
The sense that displacing an app or website is easier than reconstructing global supply chains fueled a lot of the excitement at the conference. Notwithstanding, the need to restructure the rest of the economy so that it works to serve the needs of people, rather than sacrificing people's needs to the dictates of the economy, was never far from people's minds. Videos of most sessions are online and will certainly make you think about the opportunities for cooperative ownership of services and structures that define our lives, online and off.
We have already covered two of many ways to migrate images into Drupal. One example allows you to set the image subfields manually. The other example uses a process plugin that accomplishes the same result using plugin configuration options. Although valid ways to migrate images, these approaches have an important limitation: the files and images are not removed from the system upon rollback. In the previous blog post, we talked further about this topic. Today, we are going to perform an image migration that will clean up after itself when it is rolled back. Note that in Drupal, images are a special case of files. Even though the example will migrate images, the same approach can be used to import any type of file. This migration will also serve as the basis for explaining migration dependencies in the next blog post.
All the examples so far have been about creating nodes. The migrate API is a full ETL framework able to write to different destinations. In the case of Drupal, the target can be other content entities like files, users, taxonomy terms, comments, etc. Writing to content entities is straightforward. For example, to migrate into files, the destination section is configured like this:
destination:
  plugin: 'entity:file'
You use a plugin whose name is entity: followed by the machine name of your target entity, in this case file. Other possible values are user, taxonomy_term, and comment. Remember that each migration definition file can only write to one destination.
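Purely as a sketch, and not part of this example module, writing to taxonomy terms instead would look like the snippet below. The default_bundle key, which sets the vocabulary every imported term is saved into, and the tags vocabulary name are assumptions for illustration:

destination:
  plugin: 'entity:taxonomy_term'
  # Assumption: save every imported term into the "tags" vocabulary.
  default_bundle: tags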
The source of a migration is independent of its destination. The following code snippet shows the source definition for the image migration example:
source:
  constants:
    SOURCE_DOMAIN: 'https://agaric.coop'
    DRUPAL_FILE_DIRECTORY: 'public://portrait/'
  plugin: embedded_data
  data_rows:
    - photo_id: 'P01'
      photo_url: 'sites/default/files/2018-12/micky-cropped.jpg'
    - photo_id: 'P02'
      photo_url: ''
    - photo_id: 'P03'
      photo_url: 'sites/default/files/pictures/picture-94-1480090110.jpg'
    - photo_id: 'P04'
      photo_url: 'sites/default/files/2019-01/clayton-profile-medium.jpeg'
  ids:
    photo_id:
      type: string
Note that the source contains relative paths to the images. Eventually, we will need an absolute path to them. Therefore, the SOURCE_DOMAIN constant is created to assemble the absolute path in the process pipeline. Also, note that one of the rows contains an empty photo_url. No file can be created without a proper URL. In the process section we will account for this. An alternative could be to filter out invalid data in a source clean-up operation before executing the migration.
Another important thing to note is that the row identifier photo_id is of type string. You need to explicitly tell the system the name and type of the identifiers you want to use. The configuration for this varies slightly from one source plugin to another. For the embedded_data plugin, you do it using the ids configuration key. It is possible to have more than one source column as an identifier: for example, if the combination of two columns (e.g., name and date of birth) is required to uniquely identify each element (e.g., each person) in the source.
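As a quick sketch of that multi-column case, the ids configuration would simply list both columns. The name and date_of_birth column names below are hypothetical and not part of this example:

ids:
  # Hypothetical columns: together they uniquely identify each source row.
  name:
    type: string
  date_of_birth:
    type: string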
You can get the full code example at https://github.com/dinarcon/ud_migrations. The module to enable is "UD migration dependencies introduction," whose machine name is ud_migrations_dependencies_intro. The migration to run is udm_dependencies_intro_image. Refer to this article to learn where the module should be placed.
The fields to map in the process section will depend on the target. For files and images, only one entity property is required: uri. Its value should be set to the file path within Drupal using stream wrappers. In this example, the public stream (public://) is used to store the images in a location that is publicly accessible to any visitor to the site. If the file were already in the system and we knew its path, the whole process section for this migration could be reduced to two lines:
process:
  uri: source_column_file_uri
That is rarely the case though. Fortunately, there are many process plugins that allow you to transform the available data. When combined with constants and pseudofields, you can come up with creative solutions to produce the format expected by your destination.
The source for this migration contains one record that lacks the URL to the photo. No image can be imported without a valid path. Let's account for this. In the same step, a pseudofield will be created to extract the name of the file from its path.
psf_destination_filename:
  - plugin: callback
    callable: basename
    source: photo_url
  - plugin: skip_on_empty
    method: row
    message: 'Cannot import empty image filename.'
The psf_destination_filename pseudofield uses the callback plugin to derive the filename from the relative path to the image. This is accomplished using the basename PHP function. Also, taking advantage of plugin chaining, the system is instructed to skip processing the row if no filename could be obtained, for example because an empty source value was provided. This is done by the skip_on_empty plugin, which is also configured to log a message indicating what happened. In this case, the message is hardcoded. You can make it dynamic to include the ID of the row that was skipped by using other process plugins. This is left as an exercise for the curious reader. Feel free to share your answer in the comments below.
Tip: To read the messages log during any migration, execute the following Drush command: drush migrate:messages [migration-id].
The next step is to create the location where the file is going to be saved in the system. For this, the psf_destination_full_path pseudofield is used to concatenate the value of a constant defined in the source and the file name obtained in the previous step. As explained before, order is important when using pseudofields as part of the migrate process pipeline. The following snippet shows how to do it:
psf_destination_full_path:
  - plugin: concat
    source:
      - constants/DRUPAL_FILE_DIRECTORY
      - '@psf_destination_filename'
  - plugin: urlencode
The end result of this operation would be something like public://portrait/micky-cropped.jpg. The URI specifies that the image should be stored inside a portrait subdirectory inside Drupal's public file system. Copying files to specific subdirectories is not required, but it helps with file organization. Also, some hosting providers might impose limitations on the number of files per directory. Specifying subdirectories for your file migrations is a recommended practice.
Also note that after the URI is created, it gets encoded using the urlencode plugin. This will replace special characters with an equivalent string literal. For example, é and ç will be converted to %C3%A9 and %C3%A7 respectively. Space characters will be changed to %20. The end result is an equivalent URI that can be used inside Drupal, as part of an email, or via another medium. Always encode any URI when working with Drupal migrations.
The next step is to assemble an absolute path for the source image. For this, you concatenate the domain stored in a source constant and the image's relative path stored in a source column. The following snippet shows how to do it:
psf_source_image_path:
  - plugin: concat
    delimiter: '/'
    source:
      - constants/SOURCE_DOMAIN
      - photo_url
  - plugin: urlencode
The end result of this operation will be something like https://agaric.coop/sites/default/files/2018-12/micky-cropped.jpg. Note that the concat and urlencode plugins are used just like in the previous step. A subtle difference is that a delimiter is specified in the concatenation step. This is because, contrary to the DRUPAL_FILE_DIRECTORY constant, the SOURCE_DOMAIN constant does not end with a slash (/). This was done intentionally to highlight two things. First, it is important to understand your source data. Second, you can transform it as needed by using various process plugins.
Only two tasks remain to complete this image migration: download the image and assign the uri property of the file entity. Luckily, both steps can be accomplished at the same time using the file_copy plugin. The following snippet shows how to do it:
uri:
  plugin: file_copy
  source:
    - '@psf_source_image_path'
    - '@psf_destination_full_path'
  file_exists: 'rename'
  move: FALSE
The source configuration of the file_copy plugin expects an array of two values: the URI to copy the file from and the URI to copy the file to. Optionally, you can specify what happens if a file with the same name exists in the destination directory. In this case, we are instructing the system to rename the file to prevent name clashes. This is done by appending the string _X to the filename, before the file extension. The X is a number starting at zero (0) that keeps incrementing until the filename is unique. The move flag is also optional. If set to TRUE, it tells the system that the file should be moved instead of copied. As you can guess, Drupal does not have access to the file system on the remote server. The configuration option is shown for completeness, but it has no effect in this example.
In addition to downloading the image and placing it inside Drupal's file system, the file_copy plugin also returns the destination URI. That is why this plugin can be used to assign the uri destination property. And that's it, you have successfully imported images into Drupal! Clever use of the process pipeline, isn't it? ;-)
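For reference, here is a sketch of how the pieces above might fit together in a single migration definition file. The label value is an assumption on my part, since the post only shows the fragments; everything else is copied from the snippets in this article:

id: udm_dependencies_intro_image
# The label below is an assumption; use whatever label the module actually defines.
label: 'UD migration dependencies introduction (image)'
source:
  constants:
    SOURCE_DOMAIN: 'https://agaric.coop'
    DRUPAL_FILE_DIRECTORY: 'public://portrait/'
  plugin: embedded_data
  data_rows:
    - photo_id: 'P01'
      photo_url: 'sites/default/files/2018-12/micky-cropped.jpg'
    - photo_id: 'P02'
      photo_url: ''
    - photo_id: 'P03'
      photo_url: 'sites/default/files/pictures/picture-94-1480090110.jpg'
    - photo_id: 'P04'
      photo_url: 'sites/default/files/2019-01/clayton-profile-medium.jpeg'
  ids:
    photo_id:
      type: string
process:
  # Derive the filename and skip rows with an empty photo_url.
  psf_destination_filename:
    - plugin: callback
      callable: basename
      source: photo_url
    - plugin: skip_on_empty
      method: row
      message: 'Cannot import empty image filename.'
  # Where the file will live inside Drupal's public file system.
  psf_destination_full_path:
    - plugin: concat
      source:
        - constants/DRUPAL_FILE_DIRECTORY
        - '@psf_destination_filename'
    - plugin: urlencode
  # Absolute URL of the remote source image.
  psf_source_image_path:
    - plugin: concat
      delimiter: '/'
      source:
        - constants/SOURCE_DOMAIN
        - photo_url
    - plugin: urlencode
  # Download the file and assign the resulting URI to the entity property.
  uri:
    plugin: file_copy
    source:
      - '@psf_source_image_path'
      - '@psf_destination_full_path'
    file_exists: 'rename'
    move: FALSE
destination:
  plugin: 'entity:file'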
One important thing to note is that an image's alternative text, title, width, and height are not associated with the file entity. That information is actually stored in a field of type image. This will be illustrated in the next article. To reiterate, the same approach used to migrate images can be used to migrate any file type.
Technical note: The file entity contains other properties you can write to. For a list of available options, check the baseFieldDefinitions() method of the File class defining the entity. Note that more properties can be available higher up in the class hierarchy. Also, this entity does not have multiple bundles like the node entity does.
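As a hedged sketch of that idea, and not something this example requires, the process section could also populate base fields such as filename, uid, and status. The default_value plugin and the specific values below are assumptions about sensible defaults, not prescriptions from this post:

process:
  # Reuse the pseudofield defined earlier for the file name.
  filename: '@psf_destination_filename'
  # Assumption: assign imported files to user 1.
  uid:
    plugin: default_value
    default_value: 1
  # Assumption: a status of 1 marks the file as permanent so cron does not delete it.
  status:
    plugin: default_value
    default_value: 1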
What did you learn in today’s blog post? Had you created file migrations before? If so, had you followed a different approach? Did you know that you can do complex data transformations using process plugins? Did you know you can skip the processing of a row if the required data is not available? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with your colleagues.
Next: Introduction to migration dependencies in Drupal
This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.
Sign up if you want to know when Mauricio and Agaric give a migration training:
Last week, we were asked to disable comment "threads" and display comments as a plain list in one of our projects. Checking the Comment field settings, we found an option that says:
"Threading: Show comment replies in a threaded list."
This basically displays the comments as a thread instead of a flat list. The option is checked by default, so it seemed like it was just a matter of unchecking it. There was a problem, though: unchecking the setting did display the comments as a plain list, but under the hood they were still being saved as a thread.
The main problem with this is that if a comment that has replies is deleted, all of its replies are deleted as well, even when the threading option is unchecked.
This is not a new problem; there is an 11-year-old issue, and it seems this has been happening since Drupal 4. The way to fix it so far was to use a contrib module called Flat Comments. The problem is that this module hasn't been ported to Drupal 8 yet.
We decided to fix that and port the module to help others with this problem. You can check the code here: https://github.com/agaric/flat_comments, and we are in contact with the module's maintainer to create the D8 branch on drupal.org soon.
We hope this helps someone else.
Sign up to be notified when Agaric gives a migration training: