

Recent News

AI for education project X5GON is ending in the Horizon 2020 programme, but work still continues

The project has developed a commoditized set of tools and systems that enable the ingestion of OER material into the X5GON registry, including semantic cross-lingual indexing of materials, automatic transcription and translation of recordings, assessment of how engaging each resource is, and suggestions for how it might be sequenced with other OERs.

Further, methods for automatically estimating users’ knowledge based on their track record of viewing different OERs enable the system to recommend content that is likely to engage learners and teachers and prove useful to them.


For example, a Moodle plug-in can provide such recommendations at the level of a particular course, while the X5Learn system can make recommendations to individual learners based on their earlier viewing experience.

The project has actively engaged with OER sites and developed systems to assist with the incorporation of OERs into the X5GON registry, significantly growing the number of sites and materials indexed by the X5GON tools.

Wikimedia Foundation Research Award of the Year

Work partially funded by K4A has won the inaugural 2021 Wikimedia Foundation Research Award of the Year: the award went to the paper “Participatory Research for Low-resourced Machine Translation: A Case Study in African Languages” and the Masakhane community.

This paper and the Masakhane community have attempted to fundamentally change how we approach the challenge of “low-resourced languages” in Africa via a set of projects funded by K4A, with the support of UNESCO, IDRC, and GIZ. The research describes a novel approach for participatory research around machine translation for African languages. The authors show how this approach can overcome the challenges these languages face in joining the Web and benefiting from the technologies other languages enjoy today.

The work of the authors and the community is an inspiring example of work towards Knowledge Equity, one of the two main pillars of the 2030 Wikimedia Movement Strategy. “As a social movement, we will focus our efforts on the knowledge and communities that have been left out by structures of power and privilege. We will welcome people from every background to build strong and diverse communities. We will break down the social, political, and technical barriers preventing people from accessing and contributing to free knowledge.”

We cannot think of a better or more inspiring example of a project we have been involved in over the last couple of years.

Knowledge 4 All Foundation supports IRCAI Launch

On March 29 and 30, 2021, the IRCAI launch event took place. 1,083 registered participants from 123 countries attended and were addressed by esteemed speakers on the first day of the event. Participants came from all geographical regions of the United Nations: the African, Asia-Pacific, Eastern European, Latin American and Caribbean, and Western European states. Non-registered participants were also invited to watch the event via live streaming on YouTube. The launch was created with input from 33 active speakers and panelists.

In his speech, the President of the Republic of Slovenia, Mr. Borut Pahor, emphasized that the establishment of IRCAI in Ljubljana is a great recognition for Slovenian researchers and the Jožef Stefan Institute, which have been working on artificial intelligence in Slovenia for several decades. According to President Pahor, artificial intelligence is a tool for a better life and offers great opportunities “for progress, for more accessible and efficient public services, quality education and better access to information, and helps us fight climate change, introduce new forms of mobility and use energy more efficiently.”

Read full report here.

[Chart: Participating Countries]

The Director-General of the United Nations Educational, Scientific and Cultural Organization (UNESCO), Ms Audrey Azoulay, who joined the event remotely from Paris, regretted that she could not attend in person as originally planned and welcomed IRCAI to the UNESCO family. “IRCAI has become a space that directs academic and human resources to research topics within the mandate of UNESCO, which, as you know, includes education, culture, science and information,” she said, adding that despite the large number of UNESCO centers, none yet deals with artificial intelligence. “Thanks to IRCAI, we now have the support of an entire team that is directing its diverse skills to ensure that artificial intelligence is used in a way that serves the common good. We are fortunate to have an ally like this to help make our ambitions reality,” she added, explaining the important role IRCAI played in drafting the UNESCO Recommendation on the Ethics of Artificial Intelligence and personally thanking the team for their efforts in leading the regional consultations on the draft recommendation. “We have already had a glimpse of the potential of this partnership. This inauguration is therefore very promising,” she concluded.

[Chart: Number of Participants by Country]

The Minister of Education, Science and Sport of Slovenia, Prof. Simona Kustec, stressed the importance of cooperation in creating opportunities to address current challenges, including through artificial intelligence, and called on all participants to work together. The Minister of Public Administration of Slovenia, Mr. Boštjan Koritnik, stressed that “Slovenia aims for a high quality and ethical use of artificial intelligence that citizens can trust” and emphasized that artificial intelligence will be one of the main priorities during the Slovenian EU Presidency.

The development of artificial intelligence in Slovenia was also highlighted by Prof. Boštjan Zalar, Director of the Jožef Stefan Institute, who stressed that the Institute has a 40-year history in the development of artificial intelligence and over 70 major projects across its departments, and that in his opinion IRCAI can further strengthen these achievements.


Support for IRCAI was also expressed by a representative of the European Commission, with which IRCAI has many strategic synergies. Anthony Whelan, Digital Policy Adviser in the cabinet of European Commission President Ursula von der Leyen, noted: “It is indeed a nice coincidence that the Slovenian Presidency is preparing to work with such an excellent asset at its doorstep, and we hope that this will also serve as a flagship for international efforts.”

The sequence of events leading to the establishment of IRCAI and the results of the Center’s work so far were presented by its Director, Prof. John Shawe-Taylor. “IRCAI has already established active cooperation with a wide range of international organizations, which it intends to further strengthen and expand,” he said in his speech. Among other things, he called for active participation through projects listed on the Center’s website.

[Chart: Number of Participants by Project]

On the first day, a panel discussion, which included several speakers from African countries, focused on building a global artificial intelligence community. The second day of the event focused on presentations of the results of IRCAI activities, opportunities for collaboration, and the use of artificial intelligence tools to support the achievement of the Sustainable Development Goals. Presentations were given by IRCAI Program Committee representatives Aidan O’Sullivan, Colin de la Higuera, Catherine Holloway and Delmiro Fernandez-Reyes.

Analyses of the six regional consultations on the UNESCO Recommendation on the Ethics of AI and of IRCAI’s ethics and regulatory approaches were presented, alongside panel discussions on the need for policy action on AI. The IRCAI Funding and Innovation Program (Social Impact Bonds), AI policies around the world and the AI Global Observatory were also presented, by IRCAI members Daniel Miodovnik, Mark Minevich and Marko Grobelnik respectively. The presentations included five reports co-authored by IRCAI representatives: Artificial Intelligence in Sub-Saharan Africa, Artificial Intelligence Needs Assessment Survey in Africa, UNESCO Ethics of AI Recommendation Regional Consultations, Opinion Series Reports: UNESCO Ethics of AI Recommendation Regional Consultations, Responsible Artificial Intelligence in Sub-Saharan Africa and Powering Inclusion: Artificial Intelligence and Assistive Technology.

A call for collaboration has also been launched to join IRCAI, which is actively working on 10 projects to be implemented in 2021. These are all designed to scale and deploy AI to address the Global Challenges that the Center has set out to tackle. IRCAI is seeking partnerships with international organizations, governments, companies, NGOs, universities, research institutes, AI consortia and government agencies around the world to implement these projects.

AI4D blog series: A Study Towards Automated Wildlife Patrol

The aim of our project is to investigate the technological feasibility of deploying unmanned ground vehicles (UGVs) for automated wildlife patrol, as well as to perform a preliminary analysis of other metadata collected from officials at a national park in Kenya. To this end, we seek to collect and publish a dataset of driving data across national park trails in Kenya, the first of its kind, and to use deep learning to predict the steering wheel angle when driving on these trails.

Setting up the data acquisition system

The data collection required a camera-mounted vehicle to be driven across national park trails while recording the trail video as well as key driving signals such as steering wheel angle, speed, and brake and accelerator pedal positions. We designed, installed and configured the data collection system in November and December 2019.

The first idea was to procure and attach sensors to the vehicle to obtain these driving signals. Upon further research, however, we discovered that most of these driving signals can be read from the CAN bus, which is exposed on the OBD-II (On-Board Diagnostics) port of most vehicles manufactured after 2008.

This information, however, is grouped and encoded under different parameter IDs, and identifying each driving parameter requires reverse engineering, a significantly time-consuming activity that could take months by itself.
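To make the decoding step concrete, here is a minimal sketch of what extracting one signal looks like once its CAN ID and byte layout are known. The layout, scale and sign convention below are invented for illustration; real layouts differ between manufacturers and models.

def decode_steering_angle(data: bytes) -> float:
    # Hypothetical layout: a 12-bit signed angle in the low nibble of
    # byte 0 plus byte 1, scaled at 1.5 degrees per unit.
    raw = ((data[0] & 0x0F) << 8) | data[1]
    if raw & 0x800:          # sign-extend 12-bit two's complement
        raw -= 0x1000
    return raw * 1.5         # degrees

print(decode_steering_angle(bytes([0x0F, 0xF0, 0, 0, 0, 0, 0, 0])))  # -24.0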

Furthermore, not all of the driving signals would be exposed on the CAN bus. The parameters exposed on the bus vary between vehicle manufacturers and models, and so does their encoding. After failing to make sense of the data read from the CAN buses of our personal vehicles, we decided to find a vehicle model that had already been reverse-engineered.

We were able to identify [1] and procure a 2012 Toyota Prius for the data collection, from which we could read the steering wheel angle, steering wheel torque, vehicle speed, individual wheel speeds, and brake and accelerator pedal positions. We used a Raspberry Pi 3 microcomputer with a PiCAN HAT to read and log the driving signals.
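As a rough illustration of the logging side, the following sketch uses the python-can library to read frames from the PiCAN interface and append them to a CSV log. It assumes a SocketCAN setup with the interface brought up beforehand; the watched arbitration IDs are placeholders, not the actual Prius IDs, which come from the notes in [1].

import csv
import can

# Hypothetical arbitration IDs to log; the real Prius IDs are taken
# from the reverse-engineering documentation in [1].
WATCHED_IDS = {0x25, 0xB4}

# Assumes the PiCAN interface was brought up beforehand, e.g.:
#   sudo ip link set can0 up type can bitrate 500000
bus = can.interface.Bus(channel="can0", bustype="socketcan")

with open("can_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "arbitration_id", "data_hex"])
    for msg in bus:  # iterating blocks until the next frame arrives
        if msg.arbitration_id in WATCHED_IDS:
            writer.writerow([msg.timestamp, hex(msg.arbitration_id), msg.data.hex()])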

Encoded driving data seen on the vehicle’s CAN bus

In order to create the dataset for training and testing the learning algorithm, each data sample would have to contain a video frame matched to the corresponding driving signals at that instant. That means all the video frames, as well as the driving signals, have to be timestamped.

The driving signals are automatically timestamped during logging on the Raspberry Pi, but most cameras don’t timestamp individual frames. Further, the camera’s internal clock would not be in sync with the RPi’s, which would cause the video frames and driving signals to be out of sync when creating the data samples.

This meant we needed a camera that could interface with the computer as a webcam, so that each frame could be read and timestamped before being written to the video file. Driving on rough national park trails would also induce a lot of vibration and so required a camera with good stabilization. These were some of the challenges in selecting a camera for recording the driving video.

Check the project documentation on GitHub

We settled on the Apeman A80 action camera, which has gyro stabilization and HD video recording and can also function as a webcam. OpenCV was used to read and record timestamped video on the computer.
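A minimal sketch of this capture loop with OpenCV is shown below: each frame is read from the camera in webcam mode, stamped with the host clock, and written to the video file while the timestamp is logged to a sidecar CSV. The device index, codec, resolution and frame rate are illustrative, and the writer assumes the camera delivers frames at the stated resolution.

import csv
import time
import cv2

cap = cv2.VideoCapture(0)  # action camera exposed as a webcam
fourcc = cv2.VideoWriter_fourcc(*"XVID")
out = cv2.VideoWriter("drive.avi", fourcc, 30.0, (1280, 720))

with open("frame_times.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["frame_index", "timestamp"])
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        writer.writerow([index, time.time()])  # stamp before writing
        out.write(frame)
        index += 1

cap.release()
out.release()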

Initially, we tried to connect the camera to the Raspberry Pi itself. But the RPi is a low-powered microcomputer: there was significant lag in recording, and it could not write video at a frame rate higher than 8 fps. We therefore decided to use a laptop, which could comfortably record HD video at 30 fps, to connect to the camera, and to use the RPi only for logging the driving signals from the vehicle’s CAN bus.

This, however, presented a different challenge: we were limited by the laptop battery. While the RPi can be charged from a portable power bank or directly from the car’s charging port, the laptop cannot. That meant significantly shorter data collection runs. We could only drive around continuously for 2 hours before we had to return to charge the laptop, which took another 2 hours.

This forced us to revise our overall data collection target down from 50 hours to 20: the 25 hours planned on national park trails were cut to 10, with the other 10 hours on a mixture of tarmac roads and other rural dirt roads.

There was also extensive testing of different video encoding methods to determine the best file size versus quality tradeoff, as well as optimization of the data collection code to ensure minimal lag during logging.
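Such encoding tests can be approximated by a small script like the one below, which re-encodes a short test clip with several FOURCC codecs and reports the resulting file sizes, with quality then judged by inspection. The codec list and file names are illustrative, and codec availability depends on the local OpenCV/FFmpeg build.

import os
import cv2

def reencode(src, dst, fourcc_str):
    # Re-encode src into dst with the given FOURCC code and return the
    # output file size in bytes.
    cap = cv2.VideoCapture(src)
    fps = cap.get(cv2.CAP_PROP_FPS)
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    out = cv2.VideoWriter(dst, cv2.VideoWriter_fourcc(*fourcc_str), fps, size)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        out.write(frame)
    cap.release()
    out.release()
    return os.path.getsize(dst)

for codec in ["MJPG", "XVID", "mp4v"]:
    size_mb = reencode("test_clip.avi", "test_" + codec + ".avi", codec) / 1e6
    print(codec, round(size_mb, 1), "MB")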

Data collection

We began the data collection in January 2020 on tarmac and rural dirt roads. The idea was to train the algorithm on a simpler dataset first and then use transfer learning for better, faster results on the national park trails. The data was collected at various times of the day, early in the morning, at noon and late in the evening, in order to get a varied dataset under different lighting conditions.

While we were able to collect the data smoothly on tarmac roads, driving over the rural dirt roads proved impossible, as they were riddled with potholes. Not only was it challenging to drive a low-body vehicle over the rough terrain, but the constant maneuvers made to go around the potholes meant that most of that data would be unusable, as it would present a different challenge altogether in training.

The challenge of driving a low-body vehicle on dirt roads also limited our choice of national parks, as we had to carefully select ones with smooth driving trails. Our plan to collect data from the Maasai Mara National Reserve had to be abandoned due to the bad road conditions there, and we opted to collect data from Nairobi National Park (8 hrs) and Ruma National Park (2.5 hrs) instead. Even these, however, were not without their setbacks, including a flat tire and bumper damage.

Another challenge faced in the parks was internet connectivity. While a stable internet connection was not needed for the data collection itself, which was done offline, a connection was needed when starting up the Raspberry Pi to allow it to initialize the correct datetime value.

This is because the RPi microcomputer does not have a battery-backed real-time clock. Unless it has a connection to the internet, it resumes the clock from the last saved time before it was shut down, and hence ends up showing the wrong time. That resulted in incorrect timestamps on the logged driving data, which could not be matched to the video timestamps.
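A simple guard against this failure mode, sketched below, is to refuse to start logging while the system clock is obviously stale, i.e. earlier than a constant recorded when the software was deployed. This is a hypothetical mitigation for illustration, not the fix used in the project.

import sys
import time

# Wall-clock time recorded at deployment; any current time earlier than
# this means the RPi clock was never synced after boot.
DEPLOY_TIME = 1577836800  # 2020-01-01 00:00:00 UTC

if time.time() < DEPLOY_TIME:
    sys.exit("System clock not set (no internet, no RTC); refusing to log.")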

This was observed while analyzing the driving data logs from one of the runs at Ruma National Park. Luckily, internet connectivity was regained towards the end of the run, and the rest of the timestamps could be reconstructed using the known message rates.
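The reconstruction amounts to back-filling timestamps from one trusted anchor using a constant per-message period derived from the message rate. A sketch, with an invented rate and anchor:

def backfill_timestamps(n_messages, anchor_index, anchor_time, period_s):
    # Corrected timestamps for messages 0..n_messages-1, assuming a
    # constant inter-message period and one known-good anchor.
    return [anchor_time + (i - anchor_index) * period_s for i in range(n_messages)]

# e.g. a signal broadcast at ~80 Hz (12.5 ms period), with message 9000
# logged at a trusted wall-clock time near the end of the run:
timestamps = backfill_timestamps(10000, 9000, 1581340000.0, 0.0125)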

Other minor issues in obtaining good-quality data included keeping the windshield clean while driving on dusty park trails, where one is not allowed to alight from the vehicle, and keeping the camera securely mounted inside the vehicle while driving over rough terrain.

Dataset preparation and Training

A significant portion of the data collected included driving around potholes, overtaking, stopping, U-turns, etc., which would not be useful for predicting the steering wheel angle within the scope of this study. All these segments had to be visually identified and removed before preparing the dataset.
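With the logs cleaned, pairing each frame with the driving signals closest in time can be done with a nearest-timestamp join, for example with pandas.merge_asof as sketched below. The file and column names are illustrative, not the project's actual schema.

import pandas as pd

frames = pd.read_csv("frame_times.csv")       # frame_index, timestamp
signals = pd.read_csv("signals_decoded.csv")  # timestamp, steer_angle, speed, ...

frames = frames.sort_values("timestamp")
signals = signals.sort_values("timestamp")

# For each frame, take the closest signal sample within 50 ms; frames
# with no nearby signal (e.g. logging gaps) are dropped.
samples = pd.merge_asof(frames, signals, on="timestamp",
                        direction="nearest", tolerance=0.05)
samples = samples.dropna(subset=["steer_angle"])
samples.to_csv("dataset_index.csv", index=False)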

Initially, we proposed to use a simple Convolutional Neural Network (CNN) model for training, as in [2], where the steering wheel angle is predicted independently from each video frame as the input. However, the steering angle also depends largely on the speed of the vehicle. Driving is also a stateful process, where the current steering wheel angle depends on the previous wheel position.

We therefore investigated the use of a more sophisticated temporal CNN model, as in [3], using recurrent units such as LSTM and Conv-LSTM, which could give more promising results. That model, however, is very computationally expensive: it would require a cluster of very expensive GPUs and would still take days to train.
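For contrast, here is a minimal Keras sketch of such a temporal variant, in the spirit of [3]: a ConvLSTM layer over a short window of frames, predicting the steering angle at the last frame. The window length and layer sizes are illustrative only, not the architecture of [3] itself.

from tensorflow import keras
from tensorflow.keras import layers

temporal_model = keras.Sequential([
    layers.Input(shape=(10, 66, 200, 3)),  # window of 10 frames
    layers.ConvLSTM2D(32, 5, strides=2),   # recurrent convolution over time
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),                       # steering angle at the last frame
])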

Training such a model proved impossible within the given timeline and budget. We therefore decided to continue with our initial proposal of a static CNN model [2].
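The static per-frame model of [2] (NVIDIA's PilotNet) is compact enough to train on a single GPU. A Keras sketch follows, with the layer sizes taken from the paper; the input normalization, loss and optimizer are common defaults rather than project-confirmed choices.

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(66, 200, 3)),            # YUV frame, as in [2]
    layers.Rescaling(1.0 / 127.5, offset=-1.0),  # scale pixels to [-1, 1]
    layers.Conv2D(24, 5, strides=2, activation="relu"),
    layers.Conv2D(36, 5, strides=2, activation="relu"),
    layers.Conv2D(48, 5, strides=2, activation="relu"),
    layers.Conv2D(64, 3, activation="relu"),
    layers.Conv2D(64, 3, activation="relu"),
    layers.Flatten(),
    layers.Dense(100, activation="relu"),
    layers.Dense(50, activation="relu"),
    layers.Dense(10, activation="relu"),
    layers.Dense(1),                             # predicted steering angle
])
model.compile(optimizer="adam", loss="mse")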

We are currently in the process of building the dataset and learning model for the project. We are also working on a preliminary analysis of the feasibility of automated wildlife patrol [4] based on other metadata collected from park officials.

We are grateful for the immense support that we always get from our mentor Billy Okal who, in spite of his busy schedule, makes time to set up calls whenever we need to consult and always comes up with great ideas that address most of our concerns.

References

[1] C. Miller and C. Valasek, Adventures in Automotive Networks and Control Units, IOActive Inc., 2014, pp. 92-97.
[2] M. Bojarski et al., End to End Learning for Self-Driving Cars, 2016, arXiv:1604.07316.
[3] L. Chi and Y. Mu, Deep Steering: Learning End-to-End Driving Model from Spatial and Temporal Visual Cues, 2017, arXiv:1708.03798.
[4] L. Aksoy et al., Operational Feasibility Study of Autonomous Vehicles, Turkey International Logistics and Supply Chain Congress, 2016.

Reposted within the project “Network of Excellence in Artificial Intelligence for Development (AI4D) in sub-Saharan Africa” #UnitedNations #artificialintelligence #SDG #UNESCO #videolectures #AI4DNetwork #AI4Dev #AI4D
