Sunday, 17 November 2024
Companies want to embrace AI because it is the future. But what happens if they deploy it without first ensuring that it serves humans? We already know the repercussions, and Moris Media, India's leading digital marketing agency, has observed and even highlighted many such instances. These include Amazon's sexist recruiting bot, the robot that broke a chess player's finger, and generative AIs that have gone rogue, producing sexist and racist images.
This narrative is an effort not to call out more such failures, but rather to learn from them and ensure that these mishaps do not recur. The United Nations picked up the gauntlet in April 2022 with the UNESCO Recommendation on AI Ethics, which over 190 nations approved. One can argue that this framework is the closest thing we have to a universal standard for Ethical AI. The list may serve as useful background for some organizations, but it is not always something they intend to apply directly.
This is primarily because the list has been found to be too lengthy, and its terminology is difficult for laypeople to comprehend. Furthermore, some companies developed their own AI Ethics guidelines before the UNESCO release and later decided that their principles were sufficient, so they saw no need to change their pre-existing proprietary approach.
DeepMind Ethics and Society was founded in 2017 to investigate and research the real-world implications of AI. The British unit of Alphabet, Google's parent company, believes that ethics must not be an afterthought.
The company put its Responsible AI principles into action with AlphaFold, a groundbreaking AI system that can take a protein's amino acid sequence and predict its three-dimensional structure.
DeepMind collaborated with its in-house Pioneering Responsibly team, which specializes in ethics and AI safety, from the start of the project to work through potential difficulties around the release of AlphaFold and its predictions; an ethics researcher was involved throughout.
Sparrow, a "useful dialogue agent that reduces the risk of unsafe and inappropriate answers," was launched earlier this year by the research company. However, DeepMind regards Sparrow as a research-based, proof-of-concept model that is still being readied for deployment. The future model is also expected to support many languages, cultures, and dialects.
DeepMind red-teams its models, imagining the criminal ways someone might exploit or abuse the AI it is developing, or how someone might attempt to breach the technology. It also uses an approach known as a premortem, akin to playing Devil's Advocate: the team assumes that everything has gone wrong, giving analysts time to work out the likely causes before launch.
DeepMind's sibling firm, Google, established the Responsible AI and Human-Centered Technology (RAI-HCT) initiative in 2021 to undertake research and build methods, tools, and best practices to guarantee that AI systems are designed ethically, putting its AI Principles into effect at scale. However, Google's ethical research team has been in flux, with several high-profile departures from the company's AI ethics organization.
An authority in the field of AI research for science and dependability has said that DeepMind is probably one of the leading groups in this area, and that in sharing and deploying these models it has been more thoughtful than most. The company has been working hard on safety and security, as well as on the proper use of these technologies.
To help the cause, Meta AI has begun taking small steps toward developing responsible services. The last two years have seen announcements signaling Meta's intent to work with policymakers, experts, and industry partners to responsibly build the Metaverse, the company's flagship product. Previously, in 2010, Facebook (now Meta) introduced facial recognition. However, after eleven years and over a billion facial recognition profiles, the company disabled the system due to widespread privacy concerns around the world.
Meanwhile, 13% of the 11,000 employees laid off by Meta last week belonged to 'Probability,' a research team focused on machine learning infrastructure that addresses privacy, integrity, and reliability, as well as machine learning for people, among other areas.
On the opposite end of the scale, Meta disbanded its Responsible Innovation team in September 2022, a group tasked with addressing ethical concerns around its products.
In the same month, Elon Musk was asked during a Tesla AI Day 2022 Q&A session whether the company had considered the long-term effects of walking robots on society.
Musk has repeatedly emphasized that he sees AI as an existential threat to humanity. One would imagine that building robots to walk among us, with the expectation of selling millions of them for public and private use, naturally presents Ethical AI challenges. However, Musk's response to the question suggested that he considers current efforts to investigate AI Ethics premature.
Unfortunately, burying one's head in the sand when it comes to Ethical AI is a bad idea. As a robotic system matures, retrofitting AI Ethics principles into it becomes more difficult and expensive, making this a short-sighted way to deal with Ethical AI concerns. For now, though, ethics remains an afterthought: perhaps one day it will raise its head, but at present the industry is heads-down and moving at full speed.