Schools’ Use Of Laptops May Be More Harmful Than Beneficial

More public school students have been using laptops, tablets or similar devices in school since the start of the pandemic:

A September 2022 report shows that the percentage of teachers who reported their schools provided their students with such devices doubled during the pandemic.

It might be tempting to celebrate how schools are doing more to keep their students connected during the pandemic.

The problem is that schools aren’t just giving kids computers to keep up with schoolwork. In a trend that could easily be described as Orwellian, the vast majority of schools are also using those devices to keep tabs on what their students are doing in their personal lives.

80% of teachers and 77% of high school students reported that their school had installed artificial intelligence-based surveillance software on these devices to monitor students’ online activities and what is stored on the devices.

In cities and school communities throughout the United States, students are being monitored through these devices.

Officials in the Minneapolis school district paid more than $355,000 to use tools provided by Gaggle through 2023.

Of all the incidents reported, three-quarters took place outside of school hours.

Baltimore schools use the GoGuardian surveillance app to monitor the behavior of their students. Police are notified when the app detects keywords that suggest suicidal tendencies.

Safety versus privacy

Vendors claim the tools keep students safe from online harms. Privacy groups have raised questions about those claims.

Vendors don’t reveal how their artificial intelligence programs were trained or what data was used to train them.

Privacy advocates are concerned that these tools may harm students by criminalizing mental health problems.

I am a researcher who studies privacy and security issues in various settings, and I know that intrusive surveillance techniques cause emotional and psychological harm to students, disproportionately penalize minority students and weaken online security.

Artificial intelligence not intelligent enough

Even the most advanced artificial intelligence lacks the ability to fully comprehend human language and context. Because of this, student surveillance systems generate a lot of false positives.

In some cases, these programs have flagged students simply for talking about the novel “To Kill a Mockingbird.”
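To see how such false positives can arise, here is a minimal sketch of keyword-based flagging in Python. It is illustrative only, since vendors do not disclose how their systems actually work, and the watchlist and function names here are hypothetical:

```python
# Minimal sketch of keyword-based flagging (illustrative only; actual
# vendor systems are proprietary and undisclosed).
FLAGGED_KEYWORDS = {"kill", "suicide", "weapon"}  # hypothetical watchlist

def flag_message(text: str) -> bool:
    """Return True if any watchlist word appears in the text."""
    words = {w.strip('.,!?"\'').lower() for w in text.split()}
    return not FLAGGED_KEYWORDS.isdisjoint(words)

# A harmless book report trips the same rule as a genuine threat would:
print(flag_message("My essay on To Kill a Mockingbird is due Friday"))  # True
print(flag_message("I enjoyed the chapters about Scout and Jem"))       # False
```

Without any grasp of context, a rule like this cannot tell a literature assignment from a genuine threat.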

Harm to students

Students are less likely to share their true thoughts online and are more careful about what they search for when they know they are being monitored.

Vulnerable groups, such as students with mental health issues, can be discouraged from seeking needed services.

When students know that their every move is being watched, they are less likely to develop into adults with a high level of self-confidence.

Being surveilled also has a negative impact on students’ ability to act for themselves and to use analytical reasoning.

It makes it difficult for them to develop the skills and mindset needed to exercise their rights.

More adverse impact on minorities

In the United States, minority students are disproportionately disciplined; African American students, for example, are more likely to be suspended than white students.

Vendors report any concerns to school officials on a case-by-case basis after evaluating flagged content. The lack of oversight in the use of these tools could lead to further harm for minority students.

The situation is made worse by the fact that Black and Hispanic students rely on school devices more than their white peers do.

As a result, minority students are more likely to be monitored and exposed to a greater risk of intervention.

Artificial intelligence programs often fail to account for racial differences in language and behavior when flagging content, which can lead to Black students being penalized more often than white students.


Machine learning algorithms are only as good as their training data and the people who built them, and the people working in this field are largely white, male and from privileged backgrounds. That is why bias in machine learning should be a concern.

Leading artificial intelligence models are more likely to flag social media comments made by African Americans as offensive.

They are 2.2 times more likely to flag messages written in African American English.

Sexual and gender minorities are also disproportionately affected by these tools. Gaggle has reportedly flagged “gay,” “lesbian” and other LGBTQ-related terms because of their association with pornography, even though the terms are often used simply to describe one’s identity.

Gaggle says it monitors such language to make sure it isn’t used to bully people.

Increased security risk

Students are at increased risk of cyberattacks due to these surveillance systems.

To comprehensively monitor students’ activities, some vendors require students to install a set of certificates known as root certificates.

As the highest-level security certificate installed on a device, a root certificate acts as a master certificate that affects the security of the entire system.

These certificates can undermine the cybersecurity checks that are built into these devices.
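To illustrate why this matters, here is a minimal sketch in Python of how TLS trust is anchored in the root certificate store. The file name monitor-root.pem is hypothetical; the point is that once a vendor’s root certificate is trusted, forged certificates issued by an intercepting proxy pass validation, so encrypted traffic can be read in transit:

```python
# Minimal sketch of how TLS trust is anchored in the root certificate store.
# "monitor-root.pem" is a hypothetical vendor root; on a managed school
# device it would be installed into the operating system's trust store.
import socket
import ssl
from pathlib import Path

def certificate_issuer(hostname: str, ctx: ssl.SSLContext) -> str:
    """Connect over TLS and report who issued the certificate we were shown."""
    with socket.create_connection((hostname, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            issuer = dict(pair[0] for pair in tls.getpeercert()["issuer"])
            return issuer.get("organizationName", "unknown")

# The default context trusts only the root certificates already on the
# machine, so a forged certificate from an intercepting proxy is rejected.
default_ctx = ssl.create_default_context()
print(certificate_issuer("example.com", default_ctx))

# A monitoring vendor's proxy re-signs every site's certificate with its own
# root. Once that root is trusted, the forgeries pass validation and all
# "encrypted" traffic can be decrypted and inspected in transit.
intercepted_ctx = ssl.create_default_context()
vendor_root = Path("monitor-root.pem")  # hypothetical vendor root certificate
if vendor_root.exists():
    intercepted_ctx.load_verify_locations(str(vendor_root))
    print(certificate_issuer("example.com", intercepted_ctx))
```

Every security check downstream inherits the trust placed in the root store, which is why a compromised or abused root certificate is so dangerous.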


Gaggle scans the digital files of more than 5 million students each year.

In addition to working with Gaggle, some schools contract with a vendor like ContentKeeper to install a root certificate on their students’ computers.

This tactic of installing certificates is similar to the approach that authoritarian regimes use to monitor and control their citizens, as well as the approach that cybercriminals use to lure victims to infected websites.

In addition, the surveillance systems themselves contain vulnerabilities that can be exploited.

Security researchers found several vulnerabilities in Netop’s Vision Pro Education software.

For example, Netop didn’t block unauthorized access to the communications between teachers and students. More than 9,000 schools used the software to monitor millions of students, and the vulnerabilities made it easy for hackers to gain control of their computers.

Students’ personal information stored by the vendors is also vulnerable to breaches.

In July 2020, criminals stole 444,000 students’ personal data, including names, email addresses, home addresses, phone numbers and passwords, from online proctoring service ProctorU.

The data was then posted online.

Schools would do well to look more closely at the harm caused by monitoring students and to question whether these tools actually make students safer.

The story has been updated to correctly identify the company that installs root certificates and to add a statement from Gaggle.
