AI Doesn’t Want Your Job

News about artificial intelligence (AI) is hard to tune out these days. The technology is exploding, and we are watching it enter countless fields. Yet there is a long road ahead in this narrative, especially when it comes to the implications AI holds for industries and, in some cases, for careers. Many workers worry about being replaced by AI, and some job types certainly may be vulnerable to its powerful capabilities. Advances in artificial intelligence have the potential to affect roles across many industries, but the widespread fear of jobs being replaced by AI may be somewhat unfounded.

Some Jobs Are Vulnerable to AI Displacement

The question for many is which jobs are potentially on the hook. It is a complicated one to answer, because there are varying degrees to which AI can step in as a human replacement. AI can take on the load of repetitive, routine tasks, and even act on some of the results of its analysis. While the exact extent of displacement will vary, here are some job categories that may experience changes or potential displacement due to AI:

  • Routine and Repetitive Tasks: Jobs that involve repetitive tasks such as data entry, assembly line work, or basic customer service interactions can be automated with AI systems, leading to a decrease in demand for human workers in these roles.
  • Transportation and Delivery: The rise of autonomous vehicles and drones has the potential to impact jobs in transportation and delivery services, including truck drivers, taxi drivers, and couriers.
  • Customer Support: Chatbots and virtual assistants are increasingly being used for customer support, reducing the need for a large number of human customer service representatives. While AI can handle basic inquiries, human support may still be required for more complex or empathetic situations.
  • Data Analysis and Research: AI-powered algorithms and machine learning systems can quickly process and analyze large volumes of data, potentially impacting jobs in data analysis and research. However, human expertise will still be necessary for higher-level analysis and business decision-making.
  • Manufacturing and Warehouse Operations: Automation using AI and robotics can streamline manufacturing processes and warehouse operations, leading to a reduced demand for manual labor in these fields.
  • Financial Services: AI algorithms are being employed for tasks like automated trading, fraud detection, and risk assessment, potentially affecting jobs in financial analysis, auditing, and certain aspects of banking.
  • Healthcare Diagnostics: AI has shown promise in medical imaging analysis and diagnostics. Because it can assist healthcare professionals in interpreting results, it may impact jobs in radiology and pathology.

The list continues and can be expanded by imagining which positions are tedious, repetitive, and ultimately costly to the organization; those are the prime candidates for AI solutions and displacement.

It is important to note that while AI may automate certain tasks, it also has the potential to create new job opportunities and transform existing roles. Many industries are adopting AI to augment human capabilities rather than replace them entirely. As technology advances, it is crucial for individuals to adapt and acquire skills that complement AI systems to remain relevant in the evolving job market.

Where AI is Already Getting to “Work”

The application of AI technology extends beyond traditional domains, with various industries harnessing its capabilities to drive innovation and improve processes. One fascinating example comes from the beauty industry, where AI algorithms are revolutionizing eyelash extensions. As highlighted by The Washington Post, computer vision and machine learning algorithms are being used to analyze facial features to recommend customized eyelash styles based on factors such as eye shape, lash length, and desired outcome. This integration of AI empowers beauty professionals with invaluable insights and personalized recommendations, elevating the overall customer experience.

AI’s creative potential is also making strides, as evidenced by the work of Meta, a prominent player in social media and advertising. Meta’s pioneering open-source AI technology, MusicGen, is designed to generate original songs based on text and melody inputs. By utilizing natural language processing and deep learning algorithms to comprehend the conveyed context and emotions, MusicGen transforms inputs into unique musical compositions. This remarkable development demonstrates how AI can enhance human creativity and reshape the music industry, blurring the boundaries between human and machine collaboration in artistic endeavors.

In healthcare, AI-powered systems are revolutionizing medical image analysis, diagnosis, and treatment planning, ultimately leading to more accurate and efficient patient care. By leveraging AI algorithms, healthcare professionals can achieve higher precision and streamline decision-making processes, ensuring improved outcomes for patients.

AI’s impact is being felt in the manufacturing sector as well. Manufacturers are increasingly relying on AI to optimize production processes, minimize downtime, and enhance quality control. By harnessing AI technologies, businesses can unlock new levels of efficiency, productivity, and operational excellence.

These examples highlight the diverse ways in which AI is already making a tangible impact across industries. As organizations continue to embrace and explore the potential of AI, we can anticipate even greater advancements that will transform how we live, work, and interact with technology.

Empowerment Through AI

These emerging use cases teach us a valuable lesson: the integration of AI into the workforce won’t involve an HR person tapping you on the shoulder and replacing your job with an AI. Instead, these examples demonstrate how AI is being introduced to enhance efficiency across various aspects of our work. The ultimate value of AI lies in its ability to augment human capabilities, recognizing that humans remain indispensable to the productivity and resources businesses rely on. Rather than displacing human workers, AI empowers them to perform their jobs more effectively, allowing for the optimization of processes and the realization of greater outcomes. AI simply acts as a catalyst for improved job performance.

The evolution and integration of AI into industries and economies are ongoing processes that require careful consideration and foresight. When applied intelligently, AI holds transformative potential that can maximize human capability within the workforce. By leveraging AI as a tool for empowerment, businesses can unlock new levels of productivity and innovation. This approach lays the foundation for a future where AI and humans collaborate to drive prosperity and technological advancement. Embracing that symbiotic relationship between artificial and human intelligence is key to cultivating a future where both can thrive.

Risking it All on Artificial Intelligence

Hopefully, thought leaders will envision and encourage AI in a way that prioritizes collaboration rather than focusing on replacing humans. As we navigate these new waters, it is imperative to consider the risks of driving a narrative centered solely on replacing human workers. That approach carries ethical concerns, potential drops in productivity, reputational damage, and exposure to the unknown. Any industry that rapidly displaces a critical component of its workforce is likely to encounter significant challenges, regardless of the possibilities.

It’s undeniable that artificial intelligence presents a remarkable opportunity for transformation and innovation across industries. While it is important to address concerns about job displacement and establish robust AI regulations, a balanced perspective is crucial. By identifying vulnerable sectors, harnessing the streamlining potential of AI, advocating for responsible regulations, addressing ethical considerations, and fostering ongoing collaboration, we can navigate the path of generative AI. Through thoughtful, informed decision-making, we can ensure its positive impact on our society, workforce, and economic landscape.

By striking a balance between innovation and human well-being, we can steer the course of AI towards a future that maximizes benefits and minimizes risks. We can leverage the capabilities of AI while upholding our ethical responsibilities.

This article was originally published in Forbes; please follow me on LinkedIn.

AI-Driven Transformation: Insights And Pitfalls

The potential transformative power of artificial intelligence (AI) is undeniable, positioning this technology as a significant force shaping the future of business. However, achieving industry-wide change is a journey filled with milestone moments, rapid advancements, and gradual adoption. Amidst these elements lie numerous challenges, and even industry giants like Google are exercising caution as they navigate the potential implications of AI. As Matthew Prince, CEO of Cloudflare, aptly puts it, typing confidential information into chatbots can be akin to “turning a bunch of PhD students loose in all of your private records.”

In this complex landscape, it becomes essential to explore both the valuable insights and potential pitfalls associated with AI-driven transformation. By delving into these aspects of AI, we can better equip ourselves to navigate the intricacies of implementing the technologies effectively and responsibly.

Big Google’s Big Irony Indicates Industry Concerns

Google, supposedly a prominent supporter of AI technologies, has joined the growing list of companies expressing caution about the use of AI. In a recent communication to its engineers and staff, Google emphasized the need for caution when it comes to entering confidential information into chatbots and utilizing computer code generated by AI tools. The company’s internal memo draws attention to the potential downsides and risks associated with AI-powered chatbot technology.

Ironically, while issuing this warning to its own employees, Google also recently made updates to its privacy policy, allowing the company to gather information individuals publicly share online to train its AI models. This move has sparked questions about privacy, web scraping practices, and the steps internet users can take to safeguard their data.

These ethical concerns, along with the financial risks, security vulnerabilities, and privacy implications raised in Google’s employee notice, have far-reaching implications for the industry. They underscore the urgent need for responsible AI deployment and highlight the crucial role of building trust with customers and stakeholders. By addressing these concerns head-on, the industry can strive towards a future where AI technologies are deployed in an ethical and responsible manner, ensuring the protection of user data and promoting transparency in AI-driven processes.

A Guide to AI Concerns

While it is expected that AI will continue to see adoption and evolution, it is crucial to exercise caution when dealing with sensitive information. The industry must be mindful of the potential risks associated with the use of autonomous technology in general, and specifically with AI. It must take appropriate measures to protect sensitive data, including:

Access Restrictions to Sensitive Data

This should be familiar territory, but when it comes to AI, sensitive information should be strictly safeguarded. This includes confidential business data, intellectual property, trade secrets, personal information, and more. Solutions in this field should include proper multifactor authentication and role-based access throughout their underpinnings to minimize risk and prevent unauthorized data exposure.
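
To make the principle concrete, here is a minimal, hypothetical sketch of role-based access with an MFA gate, written in Python. The roles, permissions, and helper names are illustrative assumptions, not a prescription for any particular product; real deployments would enforce these checks through an identity provider and centralized policy rather than in-memory lookups.

```python
from dataclasses import dataclass

# Hypothetical roles and permissions, for illustration only.
ROLE_PERMISSIONS = {
    "analyst": {"read:reports"},
    "data_steward": {"read:reports", "read:pii"},
    "admin": {"read:reports", "read:pii", "export:data"},
}

@dataclass
class User:
    name: str
    role: str
    mfa_verified: bool = False

def can_access(user: User, permission: str, sensitive: bool = False) -> bool:
    """Allow access only if the role grants the permission; require a
    completed MFA challenge before any sensitive data is touched."""
    allowed = permission in ROLE_PERMISSIONS.get(user.role, set())
    return allowed and (user.mfa_verified or not sensitive)

alice = User("alice", "analyst", mfa_verified=True)
bob = User("bob", "data_steward", mfa_verified=False)
print(can_access(alice, "read:pii", sensitive=True))  # False: role lacks the permission
print(can_access(bob, "read:pii", sensitive=True))    # False: MFA not completed
bob.mfa_verified = True
print(can_access(bob, "read:pii", sensitive=True))    # True
```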

Employee Training and Awareness

The human factor is always a concern, which means it must be communicated, sooner rather than later, that AI systems are to be worked with responsibly. Through education, training, and consistent messaging, a human-focused improvement program can significantly reduce the likelihood of unintentional data leakage, and may in fact be one of the most significant tools available today.

Ongoing Vulnerability Assessments

With the rapid advancements that allow AI systems to sound convincingly human, it is essential to conduct regular vulnerability assessments and penetration tests to identify potential weaknesses wherever AI systems integrate into the enterprise environment. Employing robust cybersecurity measures, such as comprehensive monitoring and intrusion detection and prevention systems, can help strengthen the organization’s overall security posture. Inevitably, anomaly detection and response will play a major role in preventing cyber incidents and data loss.
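
As a rough illustration of the anomaly-detection idea, here is a minimal sketch assuming nothing more than a baseline of hourly failed-login counts; the data, the three-standard-deviation threshold, and the function name are illustrative, and a production system would draw on far richer telemetry and purpose-built tooling.

```python
import statistics

def flag_anomalies(counts, threshold=3.0):
    """Flag values more than `threshold` standard deviations above the baseline mean."""
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts) or 1.0  # guard against a perfectly flat baseline
    return [(i, c) for i, c in enumerate(counts) if (c - mean) / stdev > threshold]

# Hourly failed-login counts; the spike in hour 9 stands out from the baseline.
hourly_failed_logins = [4, 6, 5, 7, 5, 6, 4, 5, 6, 48, 5, 6]
print(flag_anomalies(hourly_failed_logins))  # [(9, 48)]
```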

Vendor Due Diligence

When partnering with third-party vendors for AI implementation and development, conducting thorough due diligence is essential. You cannot let this become a gap; it’s essential to assess third-party security protocols, data handling practices, and compliance with industry standards. This will help ensure proprietary information remains protected throughout the AI lifecycle.

Know What You’re Doing At All Times

In the realm of AI, the age-old saying of “buyer beware” takes on a new meaning: “user beware.” Throughout the entire journey with AI, it is crucial for us, as humans, to remain aware of when we are interacting with an AI system. As these interactions often occur through channels that mimic human communication, it is essential for businesses to clearly disclose the presence of AI.

When businesses are transparent about AI involvement and acknowledge both its advanced potential and its limitations, users can establish a foundation of trust and productivity while upholding ethical considerations. This awareness enables users to navigate the AI landscape more effectively and make informed decisions about their engagement with AI technologies.

However, we must recognize that we are only at the beginning of the artificial intelligence age. This stage can be seen as the early adoption phase, where responsible implementation of these technologies must be designed and baked in. What we build now will shape the path towards positive impact and desired outcomes into the future. It is the responsibility of technology stakeholders to drive the ethical and effective use of AI, introducing advantages while maintaining a commitment to responsible practices.

As we move forward, it is crucial for stakeholders to prioritize responsible AI implementation, considering the long-term implications and striving for beneficial outcomes. By doing so, we can harness the full potential of AI while ensuring ethical considerations and positive societal impact.

This article was originally published in Forbes; please follow me on LinkedIn.

Almost Human: The Threat Of AI-Powered Phishing Attacks

Artificial Intelligence (AI) is undoubtedly a hot topic and has been hailed as a game-changer in many fields, including cybersecurity. There is plenty of buzz about it, from the good to the bad and everything in between. Even Elon Musk and other tech leaders are advocating for AI development to be curbed, or at least slowed. While AI technology holds scintillating and amazing implications for society, there are also plenty of bad and strange things that could happen. This is something we discussed in detail when the metaverse was all the rage, but all of those technological scenarios pale in comparison to what happens when the plainest, simplest of threats wind up in the wrong hands.

Think Like a Hacker

As with any technological advancement, with AI there is always the potential for malicious misuse. To understand the impact of AI on cybersecurity, we need to first think like a hacker. Hackers like to use tools and techniques that are simple, easy, effective, and cheap. AI is all those things, especially when applied in fundamental ways. Thus, we can use our knowledge of the hacker mindset to get ahead of potential threats.

Aside from nation-state-sponsored groups and the most sophisticated criminal hacker syndicates, the commotion over cyber hackers using AI in advanced technological ways misses the bigger, more threatening point. AI is being used to mimic humans in order to fool humans. AI is targeting YOU, and can do so when you:

  • Click on a believable email
  • Pick up your phone or respond to SMS
  • Respond in chat
  • Visit a believable website
  • Answer a suspicious phone call

Just as AI is making everyday things easier, it’s making attacks easier for cybercriminals. They’re using the technology to write believable phishing emails with proper spelling and grammar, and to incorporate data collected about the target company, its executives, and public information. AI is also powering rapid, intelligent responses to messages. It can quickly create payload-laden websites or documents that look real to an end user. AI is even being used to respond in real time with a deepfaked voice, built from real voice samples captured through unsolicited spam calls.

Just the Beginning

Many of the hacks on the rise today are driven by AI, but in a low-tech way. AI tools are openly available to everyday people now, but they have been in use in dark corners of the internet for a while, often in surprisingly simple and frightening ways. The surging success rates of phishing campaigns, man-in-the-middle (MITM) attacks, and ransomware will prove to be related to the arrival of AI and the surge in its adoption.

The use of AI in phishing attacks also has implications for the broader cybersecurity landscape. As cybercriminals continue to develop and refine their AI-powered phishing techniques, it could lead to an “arms race” between cybercriminals and cybersecurity professionals. This could result in increased demand for AI-powered cybersecurity solutions that might be both costly and complex to implement.

Cybersecurity Response

To protect against AI-powered phishing attacks, individuals and businesses can take several steps, including:

  • Educating users about the risks of phishing attacks and how to identify them
  • Implementing strong authentication protocols, such as multi-factor authentication
  • Using anti-phishing tools to detect and prevent phishing attacks
  • Implementing AI-powered cybersecurity solutions to detect and prevent AI-generated phishing attacks (a minimal illustrative sketch follows this list)
  • Partnering with a reputable Managed Security Services Provider (MSSP) who has the breadth, reach, and technology to counter these attacks
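
As a toy example of what automated phishing detection can look like, here is a minimal sketch of a rule-based scorer, assuming a hypothetical set of suspicious phrases and trusted sender domains; real anti-phishing tools layer on many more signals, such as sender reputation, URL intelligence, attachment sandboxing, and machine-learning models.

```python
import re

# Illustrative heuristics only; these phrases and weights are assumptions.
SUSPICIOUS_PHRASES = ["verify your account", "urgent action required",
                      "password will expire", "wire transfer", "gift card"]

def phishing_score(subject: str, body: str, sender_domain: str,
                   trusted_domains: set) -> int:
    """Return a simple risk score; higher means more likely to be phishing."""
    text = f"{subject} {body}".lower()
    score = sum(2 for phrase in SUSPICIOUS_PHRASES if phrase in text)
    if sender_domain not in trusted_domains:
        score += 3  # unknown or look-alike sender domain
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
        score += 3  # links pointing at raw IP addresses are a classic tell
    return score

score = phishing_score(
    subject="Urgent action required: verify your account",
    body="Click http://192.0.2.15/login within 24 hours.",
    sender_domain="example-support.biz",
    trusted_domains={"example.com"},
)
print(score)  # 10: high enough to quarantine or flag for human review
```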

AI is becoming ubiquitous in homes, cars, TVs, and even space. The unfolding future of AI and sentient technologies is an exciting topic that has long captured the imagination. However, the dark side of AI looms when it’s turned against people. This is the beginning of an arms escalation, although there is no AI that can be plugged into people (yet). Users beware.

This article was originally published in Forbes; please follow me on LinkedIn.

Is the metaverse safe?

An immersive new virtual realm is an exciting undertaking, but without a properly executed security plan, things could go terribly wrong. Read this piece from Ntirety CEO Emil Sayegh, originally published in Forbes, for insights on security concerns with the all-new Metaverse. 


If it isn’t clear by now, it will be soon: the metaverse is coming. While still only a concept, all this talk about virtual worlds, brain chips, tactile interfaces, and artificial intelligence (AI) can only mean these technologies will soon come together. Many folks will get wrapped up in this merger of the virtual world with the physical world once the metaverse fully arrives. Unfortunately, anytime new and exciting technologies emerge, cybersecurity is often an afterthought. Cybersecurity will be the Achilles heel of the metaverse. Without security built in from the base level, the entire metaverse will face significant issues that could take years to unravel.

Welcome to the unsafe metaverse 

The first known mention of a metaverse came from science fiction back in the 1990s. More recently, Facebook stepped in and transformed itself (and its name) around a new concept: a personal, customized, and interactive virtual world that it is building, burning $500 billion of market cap in the process.


By most definitions, however, the metaverse will be a place where physical meets virtual and boundaries between the two become increasingly faint. It will eventually incorporate our world of work, our friendships, where we shop, how we spend our free time, what we eat, how we learn, and countless other applications. The metaverse will have access to our most private information and habits. As people begin to live in these virtual worlds, the metaverse will be able to learn a lot about us, others, and things we would barely consider today.  

If the metaverse is an inevitability, then it is our moral obligation to build one that is safe, private and secure. With the advent of the metaverse, we are going to have to rebuild, redefine and relearn so many things we take for granted in the “real world.” 

What does it mean when you close and lock your front door? Or how about your call screening? What do the security protocols in your life look like when you are at home versus when you are in a public place? How do you know who you are talking to? The metaverse has so many unknowns that it simply cannot be considered safe, by any standard.

The wild west of the metaverse  

Cue the image of Clint Eastwood for this — at this moment, the metaverse is the wild, wild West. A lawless land that few dare venture into — but just like the old west, some people are ready for the metaverse. Instead of old-fashioned bandits and outlaws, they’re called hackers, scammers and various other names.  

Nefarious types historically gravitate to new technologies in search of opportunities. Already, there are reports of scams in NFT transactions, fraudulent Ethereum addresses, and several other types of abuse. Now please remember, all Facebook did was change its name to Meta.

Where was its plan and commitment to the privacy, security, and mental health of its users? Crypto, NFTs, and smart contracts will undoubtedly be a fundamental part of the metaverse construct. Cyberbullying, doxing, ransom scams, and other familiar schemes will also swiftly make their way over to the metaverse, and they will be there early. Criminals are attracted to an environment where rules don’t exist and victims have limited rights.

One of the biggest risks in the metaverse will be data security and privacy. Before the metaverse, layers of abstraction existed, thanks to the physical world and our carefully balanced engagement through smartphones, computer systems, and apps. In the metaverse, significant engagement will run through artificial and virtual reality systems, creating a nexus point of data that is ripe for targeting. Data collection alone is cause for significant concern, with biometric, behavioral, financial, and profile information, plus troves of additional personal data, built in.

Garbage in, garbage out 

If you have been in information technology long enough, you are familiar with the phrase garbage in, garbage out. It’s a bad way of doing things, and before we start packing up and moving to the metaverse, we must make sure we are ready for things such as:

  • Social engineering. As we’ve seen in corporate and individual scenarios, social engineering can lead to massive loss of data, loss of access, and financial damage. This is among the primary vectors for data breaches.
  • Blockchain security. Blockchain itself is strong on the validation of transactions and data. However, the integration of blockchain is an additional concern that bears scrutiny. For example, with just a bit of misdirection, an infiltrator can stage the interception and ownership of data. The network, identification, validation, and supporting DNS structures are examples of technical elements that must be secured.
  • Privacy concerns. The issues that plague us on the web and in databases everywhere will plague us in the virtual world. Data collection, retention, and sharing are just some of the areas that require definition, the establishment of individual rights, and regulation.
  • Digital boundaries. Users must maintain their rights of privacy and engagement with others. This matter is complicated by the fact that there are no countries in the metaverse, and no corresponding jurisdictions, at least for now.
  • Security of data transactions. From purchases to smart contracts, a binding construct will drive the exchange of data. The security of these transactions is critical to the success of the metaverse. Time will tell how general transactions may be regulated, taxed, and reported.
  • Identity of users. We are, in the physical world, what we are. Our being is tangible. One of the things that will have to be determined is what happens when an exact copy of your digital self is created or restored from a backup. If there’s a conflict, which version should continue to exist? What if a corrupted or erroneous copy comes into existence? What if that copy is intentionally modified or unintentionally wiped out?
  • Identity of others. Metaverse existence begins with avatars, a visual and perhaps audio-based representation of whatever its creator put together. That user’s identity is questionable until you can confirm who they are in some real-world way that you trust. What about the inevitable presence of bots, as we saw in the “meme stock” sagas? Are they friendly bots? Will you even know when you are engaging one?

Concerns unchecked 

Let us not spoil what the metaverse can be by leaving these security and privacy concerns unchecked. Let us minimize, and hopefully avoid, the deafening noise and infiltration of non-human influence found on social media channels and online forums. The best metaverse is a genuine forum for humans, devoid of bots and hackers.

The metaverse is a concept that is launching lots of discussions and it is a likely part of our collective futures, but it needs to be a force for good. For now, the concept is vague, but the cybersecurity challenges ahead of us are clear, and we can act on those right now. 

 

Check out this piece, originally published in Forbes, here and follow me on LinkedIn. 

2022 Cyber Realities

While 2022 holds promise for a better future through advancements in technology, new cyber risks will come along with it. We must move forward with a positive mindset, while not forgetting past mistakes. Originally published in Forbes, 2022 Cyber Realities builds on Ntirety CEO Emil Sayegh’s earlier piece, Predicting What 2022 Holds For Cybersecurity.

Looking to the Future

In addition to my top ten predictions, posted on January 6th, here are a few more:

  1. Ransomware Will Continue to Evolve

Ransomware, which is malware that encrypts a user’s data and demands a ransom payment to unlock it, is one of the most rapidly evolving cyber threats. Ransomware attacks continue to cost businesses billions, a trend that is expected to continue, with attacks demanding ever larger ransom amounts. This is a market, and incentive will drive innovation and evolution in an already rapidly changing and challenging arena of cat and mouse.

  2. Blockchain Technology Will Be Used for More Security, Finally

Blockchain technology is often associated with cryptocurrencies like Bitcoin, but it can actually be used for so much more. Companies are already using blockchain to secure business data, improve cybersecurity, and protect user privacy. In 2022, many businesses will have moved their operations to the cloud – instead of having physical servers on-site – making protections from cyberattacks a priority. Blockchain technology can help to secure these cloud-based operations by creating a tamper-proof record of all transactions.  
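
To ground the “tamper-proof record” idea, here is a minimal sketch of a hash chain, the building block behind blockchain-style ledgers, assuming illustrative record contents; it demonstrates tamper evidence only, not consensus, distribution, or any particular blockchain platform.

```python
import hashlib
import json

def chain_records(records):
    """Build a minimal hash chain: each entry stores the hash of the previous
    entry, so tampering with any record breaks every hash that follows it."""
    chain, prev_hash = [], "0" * 64
    for record in records:
        payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
        entry_hash = hashlib.sha256(payload.encode()).hexdigest()
        chain.append({"record": record, "prev": prev_hash, "hash": entry_hash})
        prev_hash = entry_hash
    return chain

def verify(chain):
    """Recompute every hash from the start; any mismatch reveals tampering."""
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps({"record": entry["record"], "prev": prev_hash}, sort_keys=True)
        if hashlib.sha256(payload.encode()).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

ledger = chain_records(["deploy app v1.2", "rotate API keys", "grant read access to analytics"])
print(verify(ledger))   # True
ledger[1]["record"] = "grant admin to attacker"
print(verify(ledger))   # False: the tampering is detectable
```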

  3. Employees Will Be a Major Source of Cybersecurity Threats

Employees are often the weakest link in a company’s cybersecurity defenses. They can be tricked into opening emails that contain malware, clicking on links that lead to phishing scams, and using unsecured Wi-Fi networks. In 2022, businesses will need to focus more on employee training and awareness to protect themselves from these types of attacks.   

As cyberattacks become more sophisticated, businesses will also look to AI, machine learning, and monitoring services to help them detect and respond to these insider-based threats.  

  4. Will the Password Become Obsolete?

Even though new technologies that can replace passwords are emerging, they won’t be widely adopted by 2022. These technologies, which include fingerprint scanners, iris scanners, and facial recognition, are not yet very user-friendly and can still be spoofed or bypassed.

As a result, passwords will remain in use for the foreseeable future. However, organizations should start to move away from relying on passwords alone and towards two-factor authentication. Two-factor authentication is a more secure way of logging in that requires users to enter a password as well as a randomly generated code, typically delivered to or generated on their mobile device. This makes it much more difficult for hackers to gain access to an account. It’s a step in the right direction, as passwords alone are extremely fallible.
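
As an illustration of that second factor, here is a minimal sketch of a time-based one-time password (TOTP, RFC 6238), the kind of rolling code an authenticator app generates rather than one sent over SMS; the shared secret below is a made-up example, and real systems provision a unique secret per user and verify the submitted code server-side alongside the password.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Time-based one-time password: HMAC-SHA1 of the current 30-second
    counter, dynamically truncated to a short numeric code."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Hypothetical shared secret for illustration only.
SECRET = "JBSWY3DPEHPK3PXP"
print("Current one-time code:", totp(SECRET))
```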

  5. Governments Will Finally Realize How Much They’ve Lost Due to Lax Cybersecurity

State and regional governments have been slow to adopt new security measures because they have been underestimating the power of cybercrime. They think that their current policies are enough to protect them from attacks. But as more and more breaches happen, it becomes clear that this is not the case. In 2022, governments will finally realize how much they’ve lost due to lax cybersecurity and they will start to take action. They will allocate more resources to improving their security infrastructure and they will also work with businesses to ensure better protection of their data. 

  6. The Use of AI for Cybersecurity Purposes Will Increase Exponentially

As mentioned earlier, the use of AI is going to increase exponentially in the next few years, and this will be especially true for cybersecurity purposes. Cybersecurity companies will escalate their use of AI-based tools to detect and prevent cyberattacks. These tools will be able to analyze data at a much faster pace than humans, and they will also be able to identify new threats that would otherwise go unseen.

Looking forward to 2022, we must fully incorporate and reflect on the key cybersecurity events of the year behind us. There are valuable lessons, still a bit of dirty laundry to clean, and a challenge that should always be at the forefront of our operations.

 

Check out this piece, originally published in Forbes, here and follow me on LinkedIn.