I have been professionally involved in an AI project for a while now (see https://toolkit-digitalisierung.de/en/fair-forward/). And this website is all about understanding and fostering knowledge commons and open knowledge peer production for human development. So I would like to present some thoughts here on how we might democratize Artificial Intelligence (AI) globally through new commons, introduce a concrete initiative that is doing exactly that, and point to a new alliance on the issue.
Let’s start with some quick background: It is becoming increasingly clear that most modern machine learning (aka AI) approaches rely on massive amounts of so-called training data. Most of the time, such training data is not available as a knowledge commons, and not available at all for most people worldwide. A good example is spoken language, as this article from GIZ explains, from which I borrow in part in the following paragraphs:
Language-based AI can be used to share information in a targeted, personalised way and to reach people who cannot read, for example through interactive voice assistants. But there’s a problem: AI can only work when it is ‘fed’ and trained with data, and suitable language data from African and Asian nations has so far been a scarce resource. Currently, such data is predominantly gathered and used by big companies like Google and Amazon. Local languages in Africa and Asia are commercially less interesting and/or more complex and are therefore often neglected, even though they promise high societal benefits. A classic case for a knowledge-commons approach.
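To make the commons point a bit more tangible, here is a minimal sketch of what working with an openly licensed speech corpus can look like in practice. It loads Mozilla Common Voice, a crowdsourced, CC-licensed voice dataset, through the Hugging Face `datasets` library. The specific dataset version and the Swahili language code are my own illustrative choices, not something from the GIZ article; check current availability and licensing terms before relying on them.

```python
# Illustrative sketch: reading a commons-licensed speech corpus
# (Mozilla Common Voice) via the Hugging Face `datasets` library.
# Dataset version and language code ("sw" = Swahili) are assumptions
# for illustration; access may require accepting the dataset's terms.
from datasets import load_dataset

# Stream the Swahili training split so we don't download the full corpus.
common_voice = load_dataset(
    "mozilla-foundation/common_voice_11_0", "sw",
    split="train", streaming=True,
)

# Each record pairs an audio clip with its human-written transcript --
# exactly the kind of data a voice assistant needs for training.
sample = next(iter(common_voice))
print(sample["sentence"])                 # the transcribed text
print(sample["audio"]["sampling_rate"])   # audio metadata
```

The point of the sketch is the licensing model, not the code: because the corpus is a commons, anyone, not just the large platform companies, can train a speech model for an otherwise neglected language.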