Our model resembles many traditional business models: we create added value by processing resources (personal and behavioral data) to deliver a service (analytics) to our customers, and we pay for the resources used (we share the revenues with our users). Yet, because people have grown accustomed to giving their data to tech giants virtually for free, the concept of being paid for data seems somewhat alien.
Of course, collecting personal data doesn’t come without costs to tech giants. They need to build products and deliver “free to use” services to serve as a context for their data collection. But what does it really mean to use a “free service” in exchange for “free data”? Is the “free data” valueless? Tech giants’ profits prove the exact opposite. Is the “free service” valueless? Who can say? How many messaging apps do we actually need? What is certain is that while “free data” is resellable, “free services” are not. In short, economic systems that rely on bartering are bad at determining the correct value of things; that is why people invented money and bartering became obsolete. This is one of the reasons we choose to pay for data: people need to know the value of their data and need to be able to freely trade it, or any ensuing usage rights.
Furthermore, if the cost of data is determined solely by the technological costs of collecting it, while ignoring the costs incurred by the people producing it, then the data is obviously undervalued. Since the cost of securing an asset tends to be proportional to its value, it follows that the security measures ensuring data privacy are undersized. This is one of the reasons we believe that true data privacy and effective ethical data use cannot exist while the cost of producing the data is ignored.
To ensure that a data processing model is ethical, its core principles must be privacy, anonymity and transparency of access to data. Personally identifying data - such as name, home address, social security number, biometrics, email address or phone number - should never be linked to data about traits and behavior, and should preferably not be stored at all if not needed. Personal data should never be sold for profit. To further ensure privacy and anonymity, analytics on individual user profiles should not be permitted. Beyond any corporate claims of ethical behavior, the only way people can trust a data trading or processing system is if it gives them complete control over their data and complete transparency of access to it. If all of the above principles were applied to any tech giant, its users would be able to effectively link their data to corporate profits, which could severely impact its business.
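The separation principle above - behavioral data never linked to identifying data - can be sketched in a few lines. This is a minimal illustration, not Dimely's actual architecture; all names (`identity_store`, `behavior_store`, `register_user`) are hypothetical, and in practice the two stores would live in physically separate systems (ideally, the identity mapping only on the user's own device).

```python
import secrets

# Hypothetical sketch: behavioral records are keyed by a random
# pseudonym; the mapping from identity to pseudonym is held separately
# and is never joined with the behavioral store.

identity_store = {}  # pseudonym -> contact info (kept by the user/custodian)
behavior_store = {}  # pseudonym -> list of behavioral events

def register_user(email: str) -> str:
    """Create a random, non-reversible pseudonym for a new user."""
    pseudonym = secrets.token_hex(16)
    identity_store[pseudonym] = {"email": email}
    behavior_store[pseudonym] = []
    return pseudonym

def record_event(pseudonym: str, event: dict) -> None:
    # Behavioral data is linked only to the pseudonym, never to the email.
    behavior_store[pseudonym].append(event)
```

Because the pseudonym is random rather than derived from the email, nothing in the behavioral store can be reversed into an identity; deleting a user's entry in `identity_store` effectively anonymizes their behavioral data.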
The European Union has established an indirect cost associated with data privacy through financial penalties under the General Data Protection Regulation. Still, this has had little to no impact on effective data privacy, as misuse of data is nearly impossible to monitor. Meanwhile, in the US and other parts of the world, companies are still selling end-user data without users’ knowledge or consent.
The European Union has also adopted the Copyright Directive, which states that when authors license or transfer their exclusive rights for the exploitation of their works or other subject matter, they are entitled to receive appropriate and proportionate remuneration. The same Directive states that authors must receive, on a regular basis, up-to-date, relevant and comprehensive information on the exploitation of their works from the parties to whom they have licensed or transferred their rights, in particular with regard to modes of exploitation, all revenues generated and remuneration due.
So why doesn’t copyright apply to behavioral data? Why doesn’t the author of a JSON file containing behavioral data have the same rights to protection, remuneration and transparency of usage as the author of a JPEG file does? Mainly because most of the thinking around copyright and related regulatory frameworks stems from the Berne Convention for the Protection of Literary and Artistic Works of 1886 - a time when JSON and JPEG wouldn’t have made much sense.
At Dimely, we view each analytics report that we sell as an on-demand collective work created by using individual works of intentional behavior by human authors. Consequently, we pay our users royalties proportional to the amount of data they contribute and to the number of reports sold.
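The proportional royalty model described above could be sketched as follows. The royalty pool share, report price and contribution counts are illustrative assumptions, not Dimely's actual figures or code.

```python
# Hypothetical sketch of proportional data royalties.
ROYALTY_POOL_SHARE = 0.5  # assumed fraction of report revenue shared with users

def royalties(contributions: dict, report_price: float, reports_sold: int) -> dict:
    """Split the royalty pool among users in proportion to the
    number of data points each contributed."""
    total = sum(contributions.values())
    pool = report_price * reports_sold * ROYALTY_POOL_SHARE
    return {user: pool * n / total for user, n in contributions.items()}
```

For example, if two reports sell at 100 each and a user contributed 300 of the 400 data points used, `royalties({"alice": 300, "bob": 100}, 100.0, 2)` allocates 75.0 to "alice" and 25.0 to "bob" under the assumed 50% pool.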
Data royalties incentivize data authenticity, privacy and security, bringing more value to companies and customers alike. This type of revenue could also serve as an early form of basic income, or as a small part of a more complex solution to economic inequality. One example Dimely employs is allowing users to invest their royalties through crowdfunding, bringing more value and prosperity to themselves and to society.
Sometimes there are simple, elegant answers to complex problems, but most of the time we can’t see the wood for the trees. We’re so used to being digitally enhanced by our devices that we’ve become blind to their potential. As people become ever more connected through mobile and wearable devices, more and more data is generated, collected and used by algorithms and robots to automate economic activities - making the production of data associated with human experience a basic economic activity in its own right.