Elon Musk has made some impact on everyone's life whether you realize it or not--his purchase of Twitter, the growing number of Tesla cars on the road, SpaceX, or his interview with Joe Rogan, just to name a few. But how closely should we listen to the world's richest man? To some, Elon Musk is the next Rockefeller (or Nikola Tesla, maybe Michelangelo?) and is viewed as a cultural icon. To others, he is a divisive character who, at times, seems to cause a stir simply for the sake of it, or just to prove a point. No matter your opinion, when someone who is helping to lead innovation in an industry speaks out about that industry, we should listen.
Why you should listen whether you like him or not...
Elon Musk's electric car brand has one of the most advanced self-driving AI systems on the road, and he invested in OpenAI, the company that created ChatGPT. Although he has stepped down from the OpenAI board, the man remains deeply invested in the development and understanding of AI, and in pursuing it as safely as possible. At the same time, he recognizes the need for regulation in this nascent industry because of the uncertain future AI poses.
At the National Governors Association in 2017, Musk warned the world that AI is "the biggest risk that we face as a civilization". He forebodingly imagines a "deep intelligence" evolving inside our networks that could start a war through propaganda, or reroute an airliner into a war zone solely to maximize a stock portfolio. Quite a postapocalyptic dystopia. He went on to describe the dangers of an unregulated AI industry, stressing the potential loss of human jobs. Preposterous? Scary? The future? Like Musk, we just don't know. So regulation and supervision are required--just as he suggested.
But how do governing bodies implement regulations on a growing field without stifling development? Should governments adopt rules like the Artificial Intelligence Act? Or should the AI industry even exist? All great questions. I don't have those answers, nor will I pretend I am qualified to write such legislation.
AI has already become deeply embedded in our vision of the future, but can humanity balance the unknown against our desire for growth without realizing Musk's foreboding vision? We need to be cautious about how AI is handled for humanity's sake, because AI's meteoric progress over the past few years should raise eyebrows. Language generation (the ability to produce written text), speech and facial recognition, deeper learning capacities, and automation are a few areas where AI has advanced dramatically in that time. Additionally, AI is bleeding into healthcare and architecture, already proving as good as or better than its human counterparts at solving the problems in those fields. If governments allow the AI industry to remain mostly unregulated, fundamental and possibly catastrophic changes will come to the workplace landscape.
Additionally, serious privacy and safety issues could easily arise in the coming years. AI could determine who gets a heart transplant or which accident first responders are dispatched to, who gets hired or given a raise based on whatever social issue the AI is promoting this month, or how much your auto insurance should cost based on AI records of your driving--possibly even a personal credit score built from your driving record, carbon emissions, CCTV activity, and internet search history, all overseen by a "deep intelligence". Only time will tell, but we should heed Elon Musk's warning. You may not be as pessimistic as Musk (heck, maybe you're optimistic), but I am advising you to be realistic about our future--AI is here, and it could be dangerous.
So... welcome to the ominous future of AI. I hope being realistic is the right choice.