AI vs Digital Ethics – can the two actually co-exist?
Alongside laws, ethics are what keep our society functioning well, allowing people to live together peacefully.
We live in a time when the exchange of knowledge and information relies on technology; effectively, it is an era of “Digital Citizenship”. But what does this actually entail? Being a digital citizen means having the skills and knowledge to use the internet through different electronic devices (such as tablets and smartphones) and to participate on social network platforms (such as Facebook, Twitter, Instagram, and Line). A person with such skills should apply their abilities for the good of society.
How to be a good “digital citizen”
- Be aware of other people’s access to IT. A good digital citizen should not discriminate against or look down on people who lack technology skills.
- Be a good seller and a good consumer. Digital citizens must obey the law in their online activity and ensure that transactions are both trustworthy and ethical. For example, they should not engage in illegal transactions or download illegal content.
- Have good manners. A good digital citizen should have proper Digital Etiquette and be responsible for their online actions.
- Respect laws and regulations. Financial transactions fall under electronic transaction laws, which are designed to prevent and suppress violations such as the theft of a business’s confidential information or of personal data.
- Do not let technology damage your health. Use it in moderation to avoid addiction that could harm your health.
- Learn how to stay safe when using technology. For example, install a data protection system that can remotely wipe your sensitive data if your device is stolen.
However, we all know that not everyone follows the rules, and some break them in the form of bullying. Discrimination against a person on the grounds of skin color, class, or religion is a form of bullying. An example of where bullying can lead was the Columbine High School massacre of April 20, 1999. The attackers were two students, Eric Harris and Dylan Klebold, who were frequent targets of bullying at school because of their obsession with the computer game “Doom”. They chose to spend their time after classes with the game rather than with their classmates, and so became constant targets of humiliation. One day they brought guns to school and began shooting indiscriminately, killing 13 people and injuring 24 before taking their own lives.
As we live in the internet era, bullying is expanding into “cyberbullying”, which includes defamation, abuse, blackmail, and the exposure of personal information in order to hurt or embarrass the target.
Cyberbullying can be divided into seven categories (according to nobully.com):
- Gossip – spreading gossip or hurtful messages about someone behind their back.
- Exclusion – excluding someone from online community groups on social network platforms like Line or Facebook.
- Impersonation – sneaking onto someone else’s account when the owner forgets to log out, and posting something that might embarrass the owner or cause a misunderstanding. If you forget to log out of Facebook at an internet café, someone else may use your account to post something indecent on your Timeline.
- Harassment – criticizing or commenting on someone in indecent terms, hurting their feelings and undermining their self-esteem. For example, some netizens have commented that they feel sorry for a rapist when they consider the victim unattractive.
- Cyberstalking – sending messages, photos, videos or other content that causes embarrassment to others online. An example is the case of Tyler Clementi in September 2010. The 18-year-old was a student at Rutgers University in the United States who asked his roommate to let him have their room to himself for one night. The curious roommate set up his laptop and webcam to spy on Tyler and, watching remotely, saw Tyler bring a man into the room. He then posted about it on Twitter, mocked Tyler’s sexuality, and invited netizens to watch Tyler via the webcam. After the incident, Tyler became severely stressed, lost his self-esteem and mental strength, and eventually took his own life by jumping from the George Washington Bridge.
- Outing and trickery – provoking someone into losing control with the aim of exposing their embarrassing behavior online.
- Cyber threats – joining in with cyberbullying, or encouraging it, instead of stopping it.
Digital Ethics and Privacy
In this digital era, where we are surrounded by various technologies and artificial intelligence (AI), Digital Ethics and Privacy are being raised at all levels. Many sectors recognize their importance, especially governments and organizations, which establish proactive campaigns to prevent possible problems. The World Ethics Forum was held to brainstorm on future trends in technology and to plan ways of dealing with the ethical problems these may raise.
Google also tried to set up an AI Ethics Board to make recommendations and weigh the risks that concern people. The worry is that AI might become uncontrollable in the future as technology develops and AI gains more and more ability to perform human tasks, such as medical examination and treatment or vehicle control.
Robot and AI control laws
At the Technology Law Conference held by the International Bar Association (IBA) from May 18-19, 2017 in Brazil, there was a discussion of how the European Parliament’s Legal Affairs Committee has outlined laws for member countries that would determine the legal status of robots and AI as “electronic persons”. The committee also set guidelines on how member nations should approach robots, as follows:
- Every robot should have an emergency switch, or “kill switch”, installed, because AI is built with the ability to develop itself and might become a threat to humans. Robots should also never be produced as weapons against humans.
- Emotions should never be instilled in robots. They should not have feelings, such as love or hate, the same way humans do.
- Insurance should be required for large robots, with the owner and the production company responsible for the insurance fees. These two parties must be liable for any problem that occurs if a robot’s control systems malfunction. Driverless vehicles also fall under this condition and must be insured as well.
- Robots should have the same rights and responsibilities as human individuals, since they are legally defined as electronic persons. A robot’s roles and responsibilities are shared with its owner and its maker. One such role is paying taxes, which was a controversial issue when the guidelines were introduced in the European Union; the aim is to reduce the impact of robots on unemployment. The Parliament also agreed that robots should make social security contributions and pay normal taxes like ordinary citizens.
One widespread worry is that AI will push humans out of their jobs. This is already happening in some countries; for example, a hotel abroad has started using robots instead of human staff, both to reduce costs and to create a new experience for guests. The same could happen in many other careers as AI gradually replaces human workers.
However, instead of worrying about whether laws and regulations will be able to control AI, one thing each organization can do now is plan how to use it wisely. A good example is the start-up Vymo, which has used AI technology to develop a customer relationship management system for salespeople.
Anyone interested in InnoHub Season 2 can apply via https://www.bangkokbankinnohub.com