The Future of Work Capsules: Is Artificial Intelligence an existential threat to humanity in the not-too-distant future?


As a topic of ongoing debate and research, some experts believe that Artificial Intelligence (AI) could pose an existential threat to humanity in the not-too-distant future. There is, however, no consensus on whether AI can destroy humanity or not. In our everyday lives, driving navigation tools and virtual assistants like Siri, Alexa, or Google Assistant are typical examples of the good side of AI. What, then, are the good, the bad and the ugly sides of AI? The kind of intelligence used by these created systems and tools needs some ethical guidance at a minimum. Are you using AI for the common good? What ethical considerations are you adopting? What is your experience? As a student of prophecy, it appears everything is ready or getting ready except God’s people.

Human beings possess intelligence and are a creation of the Creator – the God Almighty. When machines, systems and software, which are created things of man, also attempt to exhibit cleverness in their way of doing things, we refer to this as the use of intelligence that is non-natural, better referred to as artificial intelligence. Let us discuss the future of work by looking at the artificial intelligence of our created things. Will this enhance our way of doing things, or will it limit us in the future?

Artificial Intelligence (AI) is the intelligence of machines or software, as opposed to the intelligence of human beings or animals. It is a field that combines computer science and robust datasets to enable problem-solving. AI embraces the sub-fields of machine learning and deep learning, which are frequently mentioned in combination with artificial intelligence. We can demonstrate AI’s use in the following way: software learns patterns from information such as language, images, audio and online behaviour, using patterns from existing and new data to make predictions and perform tasks that normally require human astuteness.

AI systems can perform tasks commonly associated with human cognitive functions such as interpreting speech, playing games, and identifying patterns. They typically learn how to do so by processing massive amounts of data, looking for patterns to model in their own decision-making. In many cases, humans will supervise an AI’s learning process, reinforcing good decisions and discouraging bad ones.

It becomes challenging when these AI systems are designed to learn without supervision. Other forms we can look at include self-driving cars, systems that understand human speech with the assistance of created tools, advanced web search engines and recommender systems, and generative tools that compete in high-level strategy games. That being said, machine learning and deep learning come into play here; both are types of AI.

While machine learning is AI that automatically adapts with minimal human interference, deep learning is a subset of machine learning that uses layered systems imitating the learning process of the human brain.

In our everyday lives, virtual assistants like Siri, Alexa, or Google Assistant are typical examples we can look at. These virtual assistants use natural language processing and machine learning to understand and respond to voice commands, allowing users to perform tasks such as setting reminders for our numerous meetings, playing music, or getting directions hands-free. Many can’t drive through the streets of Accra without the use of such aids.

Another example is the use of recommendation systems by companies like Netflix, Amazon, and YouTube to recommend content or suggest products based on a user’s past behaviour and preferences. These are just a few examples of how AI is being used to improve our daily lives. So I say, be careful of your digital footprint.
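To make the idea concrete, here is a minimal sketch of one common way such systems score similarity between users. The names and ratings are invented for illustration; real platforms like Netflix or YouTube use far more sophisticated, proprietary models.

```python
# Illustrative sketch: find the user whose tastes are closest to yours,
# using cosine similarity between rating vectors. All data is hypothetical.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length rating vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical viewing history: how much each user liked each of four titles.
users = {
    "ama":  [5, 4, 0, 1],
    "kofi": [4, 5, 1, 0],
    "esi":  [0, 1, 5, 4],
}

def most_similar_user(target):
    """Return the other user whose ratings best match the target's."""
    others = {name: v for name, v in users.items() if name != target}
    return max(others, key=lambda name: cosine_similarity(users[target], others[name]))

print(most_similar_user("ama"))  # prints "kofi"
```

Once the closest user is found, titles they rated highly but the target has not seen become candidate recommendations; this is the "people like you also watched" pattern in its simplest form.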

If AI can improve our lives, will that be for good or bad purposes?

AI has the potential to be used for both good and bad purposes. On the positive side, AI can be used to improve our quality of life in many ways, such as through autonomous vehicles, facial recognition software, and virtual assistants like Amazon’s Alexa and Apple’s Siri, as demonstrated earlier. When Covid-19 hit our shores, we resorted to virtual engagements. In healthcare, education and several other industries, AI is used to improve efficiency and accuracy.

On the negative side, AI can also be used wickedly. For example, actors looking to do harm may use AI applications to target confidential or embarrassing data, or any data that can be used for ransomware. I was so shocked recently to find out that our mobile phone chargers can be easy targeting tools these days, stealing tons of information. Are we really safe? The challenges of AI are numerous, and so are the benefits. Are you preparing the right way for the future of work?

AI can be used maliciously in several ways. For example, it can be used to automate the first steps of an attack through content generation, improve business intelligence gathering, speed up the rate at which both potential victims and business processes are compromised, and assist with the scale and effectiveness of social engineering attacks.

AI can also be used to learn to spot patterns in behaviour, understand how to convince people that a video, phone call or email is legitimate when it is not, and then persuade them to compromise networks or hand over sensitive data. Additionally, AI can be used to disrupt the trade-off between scale and efficiency, allowing large-scale, finely-targeted and highly-efficient attacks.

Using AI for the common good, ethical considerations to adopt

Adopt these steps to prevent AI from being used maliciously:

Ethical guidelines: One approach is to develop ethical guidelines to ensure that AI is used for good and not for malevolent purposes. It is important, when developing and using AI, to take into account ethical considerations such as fairness, transparency, justice, responsible behaviour, privacy, freedom, trust, dignity, solidarity, autonomy and kindness, so that our designs are robust against adversarial attacks and are used in a way that is sustainable and promotes the common good.

Use AI for Intended purpose: Another approach is to monitor AI to ensure that it is being used for its intended purpose.

Public Education:  It is also important to educate the public about the potential dangers of AI, explore, prevent, and mitigate the use of artificial intelligence by hostile entities, and proactively participate in the security ecosystem surrounding artificial intelligence.

There are several steps that can be taken to ensure that AI systems are designed to be robust against adversarial attacks. Our adoption and use of adversarial training, in which the AI system is trained on adversarial examples, will help improve its ability to detect and defend against such attacks. Another approach is to ensure that the data used to train the AI system is trustworthy and free from contamination. Additionally, it is important to have mechanisms in place for inspecting the training data and monitoring the behaviour of the AI system to detect any signs of malicious activity, using performance assessment methods throughout the development lifecycle of complex AI systems.
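The adversarial-training idea above can be sketched in miniature: augment the training data with slightly perturbed copies of each example so the model also learns to classify inputs an attacker has nudged. The tiny perceptron, the perturbation size and the data below are all illustrative assumptions, not a production defence.

```python
# Toy sketch of adversarial training: train a simple perceptron on clean
# examples PLUS randomly perturbed copies of them. Everything here is
# a hypothetical illustration of the idea, not a real-world defence.
import random

random.seed(0)

def perturb(x, epsilon=0.3):
    """Nudge each feature by +/- epsilon, mimicking an adversarial change."""
    return [xi + random.choice([-epsilon, epsilon]) for xi in x]

def train_perceptron(data, epochs=20, lr=0.1):
    """Train weights and bias on (features, label) pairs, labels in {-1, +1}."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            if y * (w[0] * x[0] + w[1] * x[1] + b) <= 0:  # misclassified
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1

# Clean examples plus perturbed copies (the adversarial augmentation step).
clean = [([1.0, 1.0], 1), ([2.0, 1.5], 1), ([-1.0, -1.0], -1), ([-2.0, -1.5], -1)]
augmented = clean + [(perturb(x), y) for x, y in clean]

w, b = train_perceptron(augmented)
print(predict(w, b, [1.5, 1.2]))   # prints 1
```

Because the model has seen nudged versions of each point during training, small attacker-style perturbations at prediction time are less likely to flip its decision; real systems apply the same principle with far larger models and carefully crafted, rather than random, perturbations.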

AI & Humanity: Destruction, Regulation and Prevention

There is no consensus on whether AI can destroy humanity or not. Some experts believe that AI could pose an existential threat to humanity in the not-too-distant future. Media reports available on CNN have it that 42% of CEOs surveyed at a Yale summit were of the opinion that AI has the potential to destroy humanity in the not-too-distant future, five to ten years from now. At that same summit, 58% said they were not worried, as this really cannot happen. Considering this is a topic of ongoing debate and research, opinions may vary. So I ask: what are you thinking?

Six (6) proposed ways to prevent and regulate AI from destroying humanity

  1. Ensuring that AI is properly regulated by laws and ethics.
  2. Limiting and isolating AI: when we limit how much information and influence AI can acquire, and we isolate AI from outside networks like the internet, it is proposed that the possibility of destruction is prevented or managed.
  3. Ensuring AI takes full advantage of humanlike options rather than open-ended ones.
  4. Some experts also suggest using blockchain technology to create a secure, trustless, and decentralized network to ensure that AI applications are not used to cause harm to humanity.
  5. Developing a legally binding treaty: Some US senators, leaders from the G7, European and non-European countries, the Council of Europe as well as the United Nations are calling for global regulations and coordination. For instance, the Council of Europe, a human rights organization with 46 member countries, is finalizing a legally binding treaty for artificial intelligence, taking steps to design, develop and apply agreements in ways that will protect human rights, democracy and the rule of law. Meaning, the potential risk is already identified as a possibility. The council, in sending a strong signal as reported by the MIT Technology Review, is said to be inviting other countries like Ukraine, Canada, Israel, Mexico and Japan to join the negotiating table to regulate AI.

In the UK, the government is equally taking broader steps to regulate AI by managing risk and looking at transparency issues, safety and more.

For the UN, with 193 member countries, the organization is calling and wishing to be the global coordination body on AI issues. It recently adopted a voluntary ethics framework ensuring, among other things, the non-use of AI for mass surveillance.

The European Union is considering the AI Act to regulate high-risk uses of AI, including the regulation of AI in the healthcare and education sectors.

  6. Regulation of intellectual property rights by regulators will help loads. Regulators must consider delegating these or some of their enforcement rights to enforcement entities with the systems and structures to manage the regulation properly.

Baptista is the author of the new book “Prepare for the Future of Work” and the CEO of FoReal HR Services. Building a team of efficient and effective workforce is her business. Affecting lives is her calling! She is a hybrid professional, HR generalist, public speaker, researcher and a prolific writer. You can reach her via e-mail on [email protected] or follow this conversation on social media pages @Sarahtistagh. Facebook / LinkedIn / Twitter / Instagram: FoReal HR Services. Call or WhatsApp: +233(0)262213313. Follow the hashtags #theFutureofWorkCapsules #FoWC
