China’s Draft Algorithmic Recommendation Technology Provisions

China is well on its way to reaching 1 billion internet users, meaning the country is home to around 20% of all people on the internet today. Alongside this, the Personal Information Protection Law (PIPL) came into effect on 1st November 2021. With usage this high usually come more stringent regulations to protect users from the risks the internet can pose to them. The Chinese authorities have noted that the use of algorithms can harm users, especially those already struggling with addictions such as gambling, and they also aim to tackle the spread of misinformation and a lack of user autonomy. Samuel Adams wrote on this for the IAPP; the full article can be found here.

What are the dangers of algorithms? 

The Chinese authorities have released a draft of 30 articles regulating the use of algorithmic recommendations. These cover the likes of algorithms, search filters, personalised recommendations, information-sharing services, users’ rights and more. Regulating these aspects of internet usage is intended to protect users from unnecessary artificial intelligence (AI) algorithms and the unpredictability of their recommendations. The provisions also cover generation and synthesis of content, personalised recommendation, ranking and selection, and decision making. This predominantly affects any party that uses or provides content material for user interaction. A prime example of this is social media.

Social media platforms use algorithms to track everything a user watches, clicks on, likes, or otherwise interacts with on the platform. With this information, the AI behind the algorithm finds similar content and shows it to the user in order to keep them invested in the platform, software, or application. This makes users feel as though the content they receive is catered to them, creating a more personalised experience. The AI learns as it processes more and more information, adapting to each person. However, due to its complexity and lack of a human moral compass, there is no easy way for it to filter out what might be harmful to an individual. A docudrama that lays out how these algorithms work is The Social Dilemma, which you can read our thoughts on here.
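To make the mechanics concrete, below is a minimal, hypothetical sketch of a content-based recommender in Python. The topic vectors, item names, and weights are illustrative assumptions, not any platform’s real implementation; the point is the feedback loop: engagement shapes the profile, and the profile decides what is shown next.

```python
# A minimal, hypothetical sketch of content-based recommendation.
# Items and users are represented as sparse topic-weight vectors;
# none of this reflects any specific platform's implementation.
from math import sqrt


def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse topic-weight vectors."""
    dot = sum(w * b.get(topic, 0.0) for topic, w in a.items())
    norm_a = sqrt(sum(w * w for w in a.values()))
    norm_b = sqrt(sum(w * w for w in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def build_profile(interactions: list) -> dict:
    """Average the vectors of everything the user engaged with."""
    profile = {}
    for item in interactions:
        for topic, weight in item.items():
            profile[topic] = profile.get(topic, 0.0) + weight
    return {t: w / len(interactions) for t, w in profile.items()}


def recommend(profile: dict, catalogue: dict, k: int = 3) -> list:
    """Rank catalogue items by similarity to the user's profile."""
    scores = {name: cosine(profile, vec) for name, vec in catalogue.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]


# The feedback loop: the user engaged with gambling-adjacent content,
# so gambling-adjacent content ranks first -- with no notion of whether
# reinforcing that interest is healthy or harmful for this user.
liked = [{"gaming": 0.9, "gambling": 0.4}, {"gambling": 0.8}]
catalogue = {
    "poker tips":    {"gambling": 0.7, "gaming": 0.3},
    "casino stream": {"gambling": 0.9},
    "cooking video": {"cooking": 1.0},
}
print(recommend(build_profile(liked), catalogue))
```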

The problem with algorithms is that they do not have a moral compass. For example, say a user were to like a picture depicting radical or harmful acts. The AI embedded within the algorithm would do what it is programmed to do and show more of this content to the user. As humans are highly influenceable, there is no telling whether that person would be encouraged to participate in the extreme or harmful actions the content brings to their attention. The same applies to addiction.

What do the articles lay out? 

As mentioned before, the new draft provisions set out 30 articles, all covering algorithmic recommendation requirements for organisations. We thought it would be helpful to highlight the key provisions that have been drafted for the implementation of the PIPL:

  • Article 6: Algorithmic recommendation services must not be used to engage in activities that harm national security, upset the economic or social order, infringe the lawful rights and interests of others, or are otherwise prohibited by laws and administrative regulations.
  • Article 7: Providers bear responsibility for algorithmic security and must establish and refine management systems for user registration, information dissemination examination and verification, algorithmic mechanism examination and verification, security assessment and monitoring, security incident response and handling, data security protection, and personal information protection.
  • Article 8: Providers must regularly examine, verify, and assess algorithmic mechanisms, models, data, and application outcomes; algorithmic models that induce users towards negative outcomes such as addiction or high-value consumption are prohibited.

For the latest trends in Data Privacy and Cyber Security in Education, read our Education Privacy and Technology Magazine.

Read it Here!

If algorithms are used in social media, then why are they important for schools?

Well, algorithmic recommendations are not only made on social media platforms; they are also embedded within some online education software. Algorithms can be used to analyse the learning behaviours of students, giving insight into how they learn and allowing teachers to offer a more tailored learning experience. However, it is important to consider the risks associated with using software and services that have algorithms embedded in them.

Algorithms on learning platforms or services consider signals such as video watch time, time taken to complete tests, and test performance. This can help identify the best form of education for a child, for example by suggesting whether they are self-motivated or task-oriented. This is a positive use of algorithms. However, there is a chance that some children could be favoured over others and treated differently because of their learning habits and success rates. This creates an unhealthy and unfair learning environment for students, introducing an imbalance of power between the students and the teacher. Understanding how to advance students’ learning in this way has never been so accessible, but the negative effects should always be considered and accounted for.
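As a purely illustrative sketch, a learning platform might derive a crude learner label along these lines. The signals, thresholds, and labels are assumptions invented for illustration, not any real product’s logic:

```python
# Hypothetical sketch of how a learning platform might label a learner.
# The signals, thresholds, and labels are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class Activity:
    video_watch_ratio: float  # fraction of the assigned video watched
    test_time_minutes: float  # time taken to complete the test
    test_score: float         # score between 0.0 and 1.0


def classify_learner(history: list) -> str:
    """Crudely split learners into 'self-motivated' and 'task-oriented'."""
    avg_watch = sum(a.video_watch_ratio for a in history) / len(history)
    avg_score = sum(a.test_score for a in history) / len(history)
    if avg_watch > 0.8 and avg_score > 0.7:
        return "self-motivated"
    return "task-oriented"


# The risk described above: if such labels are treated as fixed truths,
# they become the mechanism by which some students are favoured.
history = [Activity(0.9, 12, 0.85), Activity(0.85, 20, 0.75)]
print(classify_learner(history))  # -> "self-motivated"
```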

The use of algorithms should be fair and explainable to those it impacts, should have a positive impact on teaching and learning, and should rely on data that is not biased towards or against any particular group of people. This means that, where possible, only the minimum of personal information should be shared with external vendors that use algorithmic recommendations. Any sensitive data that could result in unfair judgement (such as ethnicity, financial status, or health records) should be shared with these vendors only with great caution. With this comes a need to assess and prequalify the safety of these types of software when implementing them in schools.
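One practical way to apply this principle is to enforce data minimisation at the point of sharing. The field names and allow-list below are hypothetical, and this is a sketch rather than a definitive implementation; it simply shows an allow-list approach under which sensitive fields never leave the school’s systems:

```python
# Hypothetical allow-list filter applied before sending a student record
# to an external vendor. All field names are illustrative assumptions.
ALLOWED_FIELDS = {"student_id", "video_watch_ratio", "test_score"}
SENSITIVE_FIELDS = {"ethnicity", "financial_status", "health_records"}


def minimise(record: dict) -> dict:
    """Share only the fields the vendor needs; withhold sensitive ones."""
    blocked = SENSITIVE_FIELDS & record.keys()
    if blocked:
        # Log rather than silently drop, so the decision is auditable.
        print(f"Withholding sensitive fields: {sorted(blocked)}")
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}


record = {
    "student_id": "s-1042",
    "video_watch_ratio": 0.82,
    "test_score": 0.74,
    "ethnicity": "redacted",       # never leaves the school's systems
    "health_records": "redacted",  # never leaves the school's systems
}
vendor_payload = minimise(record)  # only the allow-listed fields remain
```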

How can I prequalify software that uses algorithmic recommendations before using it in school?

A vendor assessment will help your school to prequalify whether vendors will adequately protect the data and the rights of your students. This means that, before implementing a piece of software, you will be able to visualise the risks posed to students’ data and decide whether the benefits of using the software outweigh those risks. With vendor assessments, your school will also be able to see whether software that uses algorithmic recommendations upholds the regulations laid out in the new privacy laws.

9ine’s vendor assessment tool on our App not only prequalifies the vendors you would like to use, but also allows you to document the risks you find and the decisions you make. This can then be integrated with Privacy Impact Assessments and data mapping to ensure you can evidence the choices you have made and why you made them. This comes in handy should a data incident occur and you are asked to explain why you chose to implement the software.

If you would like to learn more about how 9ine can help your school with vendor assessments and software that uses algorithmic recommendations, or you would like help with your PIPL compliance programme, talk to one of our team.

Book a Consultation
