Topic: The challenge of making moral machines [re: Mod vege]
Author: Mod vege, Moderator (old dog)
Posted: 21.03.24 02:19





Artificial intelligence has the potential to improve industries, markets and lives – but only if we can trust the algorithms.

[image]https://media.nature.com/lw767/magazine-assets/d42473-022-00163-5/d42473-022-00163-5_22833642.jpg?as=webp[/image]
As applications for AIs proliferate, so do questions about ethical development and embedded bias. Credit: MF3d

In the waning days of 2020, Timnit Gebru, an artificial intelligence (AI) ethicist at Google, submitted a draft of an academic paper to her employer. Gebru and her collaborators had analysed natural language processing (NLP), and specifically the data-intensive approach of training NLP artificial intelligences (AIs). Such AIs can accurately interpret documents produced by humans, and respond naturally to human commands or queries.

In their study, the team found that the process of training an NLP AI requires immense resources and creates a considerable risk of embedding significant bias into the AI. That bias can lead to inappropriate or even harmful responses. Google was skeptical of the paper’s conclusions, and was displeased that Gebru had submitted it to a prominent conference. The company asked Gebru either to retract the paper or remove any mention of Google affiliations. Gebru refused the terms. Within a day, she learned that she no longer had a job.

Gebru’s sudden ouster raised serious questions about the transparency, accountability and safety of AI development, particularly in private companies. It also crystallized concerns about AI algorithms that had been bubbling along for years.

Whether embedded in a natural-language processor or a medical diagnostic, AI algorithms can carry unintentional biases, and those biases can have real-world consequences. The manipulation of the Facebook algorithm to impact the 2016 United States presidential election is one frequently cited example. As another, Aimee van Wynsberghe, an AI ethicist at the University of Bonn in Germany, cites an abortive effort by Amazon to use an AI-based recruiting tool. The tool, which was tested between 2014 and 2017, drew the wrong lessons from the company’s past hiring patterns.

“When they put it in practice, they found that the algorithm would not select women for the higher-level positions, only for lower-level ones,” says van Wynsberghe.

Yet the development of AI continues to accelerate. The market for AI software is expected to reach US$63 billion in 2022, according to Gartner Research, on top of 20% growth in 2021. AI is already commonplace in online tools such as recommendation and optimization engines and translation services, and higher-impact applications are on the horizon, particularly in large sectors such as energy, transportation, healthcare, manufacturing, drug development and sustainability.

Given the size and number of opportunities, the enthusiasm for AI solutions can obscure risks associated with them. As Gebru found, AIs have the potential to cause real harm. If humans can’t trust the very machines meant to help them, the true promise of the technology may never be fulfilled.

Smarter by the day

Although many AIs are programmed directly by humans, most modern implementations are built on artificial neural networks. The algorithms analyse data to identify and extract patterns, essentially ‘learning’ about the world as they go. The interpretations of these data guide the next step of analysis, or inform decisions made by the algorithm.

Artificial neural networks analyse data collaboratively in a manner roughly analogous to the neurons in the human brain, explains Jürgen Schmidhuber, director of the AI Initiative at King Abdullah University of Science and Technology (KAUST) in Saudi Arabia. He developed a foundational neural network framework known as ‘long short-term memory’ (LSTM) in the late 1990s.

“In the beginning, the learning machine knows nothing – all the connections are random,” he says. “But then over time, it makes some of the connections stronger and some of them weaker, until the whole thing can do interesting things.”
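
Schmidhuber’s description maps onto the standard training recipe: connection weights start out random and are repeatedly nudged by gradient descent until the network’s outputs become useful. The short Python/NumPy sketch below illustrates the idea on a toy problem, a tiny network learning the XOR function; it is purely illustrative and not drawn from any system mentioned in this article.

[code]
# Minimal sketch (Python + NumPy): a tiny neural network whose random
# connections are gradually strengthened or weakened by gradient descent.
# Purely illustrative; not code from any system discussed in the article.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn XOR from four examples.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# "In the beginning ... all the connections are random."
W1 = rng.normal(size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(10000):
    # Forward pass: compute the network's current answers.
    hidden = sigmoid(X @ W1 + b1)
    out = sigmoid(hidden @ W2 + b2)

    # Backpropagation of the mean-squared error.
    d_out = (out - y) * out * (1 - out)
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)

    # Make some connections stronger and some weaker.
    W2 -= learning_rate * hidden.T @ d_out / len(X)
    b2 -= learning_rate * d_out.mean(axis=0, keepdims=True)
    W1 -= learning_rate * X.T @ d_hidden / len(X)
    b1 -= learning_rate * d_hidden.mean(axis=0, keepdims=True)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]] as training succeeds
[/code]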

[image]https://media.nature.com/lw767/magazine-assets/d42473-022-00163-5/d42473-022-00163-5_22833638.jpg?as=webp[/image]
Artificial neural networks, a popular AI model, are trained on large data sets. Bias introduced into that data can unwittingly translate to the AI. Credit: Blackdovfx

Such training is a characteristic of LSTM and other approaches to neural networks, and it’s a reason those AIs have become so popular. An AI that learns to learn has the potential to develop novel solutions to extremely difficult problems. The FII Institute THINK initiative, for example, is pursuing a multi-pronged roadmap for AI development to explore healthcare applications such as drug discovery and epidemic control, as well as sustainability-oriented efforts to monitor and protect forest and marine ecosystems – all of which lend themselves to AI applications.

But training can build bad habits as easily as good ones. As Gebru found with NLP AIs, very large and improperly curated data sets can amplify rather than rectify human biases in an AI’s decision-making process. Sandra Wachter, a researcher specializing in data ethics at the University of Oxford in the United Kingdom, highlights the example of diagnostic software tools designed to detect signs of skin cancer through image analysis, which fare poorly on black- or brown-skinned individuals because they were primarily trained on data from Caucasian patients. “It might be misdiagnosing you in a way that could actually have harmful consequences for your health and might even be lethal,” she says.
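
The failure mode Wachter describes can be surfaced with a very simple check: evaluate a trained model’s accuracy separately for each group represented in the test data. The sketch below (Python, with entirely hypothetical labels, predictions and group tags) shows the kind of per-group breakdown that exposes such gaps.

[code]
# Minimal sketch: measuring how a model's accuracy differs across groups.
# All labels, predictions and group tags here are synthetic placeholders,
# not data from the diagnostic tools discussed above.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return accuracy per group label (e.g. per skin-tone category)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {group: correct[group] / total[group] for group in total}

# Hypothetical evaluation set: true labels, model predictions, group membership.
y_true = [1, 0, 1, 1, 0, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

print(accuracy_by_group(y_true, y_pred, groups))
# {'A': 1.0, 'B': 0.2} -- the kind of gap a poorly curated training set can produce
[/code]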

Similar training data problems have plagued IBM’s AI-driven Watson Health platform, and the company recently moved to divest itself of this technology after years of struggling with poor diagnostic performance and ill-advised treatment recommendations.

Such cases raise the question: Who is to blame when an algorithm does not work as designed? Answers may be easy to reach when an AI’s conclusions are objectively wrong, as in certain medical diagnostics. But other situations are much more ambiguous.

For years, Facebook enabled companies to target their advertising based on algorithmically derived information that allowed the platform to infer a user’s race, an option now discontinued. “Black people wouldn't be able to see certain job advertisements, or advertisements for housing or financial services, for example,” says Wachter. “But those people didn’t know about it.”

The victims of discrimination might have a claim in the courts after the fact. But the best solution is to pre-empt the introduction of destructive bias in the first place with ethical AI design.

Rules for robots

The idea of imbuing machines with ethics is not new. Author Isaac Asimov penned his Three Laws of Robotics when thinking of androids more than 75 years ago, and all three of his laws raise ethical considerations. In research labs around the world, science fiction is now edging towards reality as researchers grapple with how to embed ethics into AI.

Current work entails identifying sets of internal guidelines that would be compatible with human laws, norms, and moral expectations, and could serve to keep AIs from making harmful or otherwise inappropriate decisions. Van Wynsberghe pushes back against the idea of calling such AI systems ‘ethical machines’ per se. “It’s like a sophisticated toaster,” she says. “This is about embedding ethics into the procedure of making the machines.”

In 2018, the Institute of Electrical and Electronics Engineers (IEEE), a non-profit organization headquartered in New York City, US, convened an interdisciplinary group of hundreds of experts from around the world to hash out some of the underlying principles of ‘ethically aligned design’ for AI systems. Bertram Malle, a cognitive scientist specializing in human-robot interaction at Brown University in Providence, Rhode Island, US, who co-chaired one of the effort’s working groups, says, “We can’t just build robots that are ‘ethical’ – you have to ask ethical for whom, where and when.” Accordingly, the ethical framework for any given AI, Malle says, should be developed with close input from the communities of people with which it will ultimately be interacting.

Work from Wachter’s team has highlighted some of this complexity. After evaluating a variety of metrics designed to assess the level of bias in an AI system, her team determined that 13 out of 20 failed to meet the legal guidelines of the European Union’s non-discrimination law.

“One of the explanations is because the majority, if not all, of those bias tests were developed in the US… under North American assumptions,” she says. This work was conducted in collaboration with Amazon, and the company has subsequently adopted an improved approach based on the open-source toolkit that resulted from the study.
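
The article does not enumerate the 20 bias tests the team evaluated, but many widely used metrics reduce to simple arithmetic over a model’s decisions. One illustrative example, rooted in exactly the North American assumptions Wachter mentions, is the ‘disparate impact’ ratio behind the US ‘four-fifths rule’; the sketch below computes it on invented data.

[code]
# Minimal sketch of one widely used bias metric, the disparate-impact ratio:
# the rate of favourable outcomes for one group divided by the rate for
# another. The US "four-fifths rule" treats values below 0.8 as evidence of
# adverse impact. All decisions below are invented for demonstration.
def selection_rate(decisions):
    return sum(decisions) / len(decisions)

def disparate_impact(group_a_decisions, group_b_decisions):
    """Ratio of selection rates (group A relative to group B)."""
    return selection_rate(group_a_decisions) / selection_rate(group_b_decisions)

# 1 = selected, 0 = rejected (hypothetical screening outcomes).
women = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]   # 20% selected
men   = [1, 1, 0, 1, 0, 1, 0, 1, 0, 1]   # 60% selected

print(round(disparate_impact(women, men), 2))  # 0.33, well below the 0.8 threshold
[/code]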

A trustworthy AI system also requires a measure of transparency, where users can get a clear sense of how an algorithm arrived at a particular decision or outcome. This can be tricky, given the ‘black box’ complexity and proprietary nature of many AI systems, but is not an insurmountable problem. “Building systems that are completely transparent is both unrealistic and unnecessary,” says Malle. “We need to have systems that can answer the kinds of questions that humans have.”

That has been another priority for Wachter’s team, which uses a strategy called ‘counterfactual explanations’ to probe AI systems with different inputs in order to determine which factors lead to which outcomes. She cites the example of interrogating diagnostic software with different metabolic parameters to understand how the algorithm determines that a patient has diabetes.
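
In practice, this kind of interrogation can be as simple as re-running a model on inputs that differ in one feature at a time and recording which changes flip its decision. The sketch below shows the pattern with a hypothetical stand-in model and invented thresholds; the real diagnostic software Wachter refers to is not public.

[code]
# Minimal sketch: probe a black-box model by changing one input at a time
# and recording which changes flip its decision. The "model" below is a
# hypothetical stand-in rule, not the diagnostic software described above.
def toy_diabetes_model(patient):
    """Stand-in classifier: flags diabetes when fasting glucose is high,
    or moderately high together with elevated BMI (illustrative rule only)."""
    return patient["fasting_glucose"] >= 126 or (
        patient["fasting_glucose"] >= 100 and patient["bmi"] >= 30
    )

def probe(model, patient, variations):
    """Re-run the model with one feature altered at a time; report flips."""
    baseline = model(patient)
    flips = []
    for feature, new_value in variations:
        altered = dict(patient, **{feature: new_value})
        if model(altered) != baseline:
            flips.append((feature, new_value, model(altered)))
    return baseline, flips

patient = {"fasting_glucose": 128, "bmi": 27, "age": 52}
variations = [("fasting_glucose", 110), ("bmi", 31), ("age", 30)]

baseline, flips = probe(toy_diabetes_model, patient, variations)
print(baseline)  # True: patient is flagged
print(flips)     # [('fasting_glucose', 110, False)] -> glucose drives this decision
[/code]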

Ethics for all

If embedding ethics and transparency into AI is a difficult problem, the ethical and transparent development of AI, by humans, could be even more challenging. Private companies like Google, Facebook, Baidu and Tesla account for a large portion of overall AI development, while new start-ups seem to emerge on a weekly basis. Ethical oversight in such settings can vary considerably.

“We see glimmers of hope, where [companies] have hired their own ethicists,” van Wynsberghe says. “The problem is that they’re not transparent about what the ethicists are doing, what they’re learning – it’s all behind non-disclosure agreements.” The firing of Gebru and other ethicists highlights the precariousness of allowing companies to police themselves.

[image]https://media.nature.com/lw767/magazine-assets/d42473-022-00163-5/d42473-022-00163-5_22833636.jpg?as=webp[/image]
Among AI ethicists, improved transparency in AI development and outputs is a priority. Doing so could foster wider trust in the technology. Credit: da-kuk/Getty Images

But there are potential solutions. To overcome the opacity of private AI development, for example, van Wynsberghe suggests that companies could collectively sponsor an independent ethical review organization, analogous to the institutional review boards that supervise clinical trials. In this approach, corporations would collectively fund a board of ethicists who take on rotating ‘shifts’ at the companies to oversee work. “So you’d have this kind of flow of information and shared experiences and whatnot, and the ethicists are not dependent on the company for their paycheck,” she says. “Otherwise, they’re scared to speak up.”

New legal frameworks could help as well, and Wachter believes that many companies are likely to welcome some guidance rather than operating in an environment of uncertainty and risk. “Now examples are being put on the table that concretely tell them what it means to be accountable, what it means to be bias-free, and what it means to protect privacy,” she says.

The European Union currently leads the way with its proposed ‘Artificial Intelligence Act’, which provides a detailed framework for the risk-based assessment of where AI systems can be deployed safely and ethically. China is also introducing regulations designed to prevent AI-based exploitation of or discrimination against users – although these same regulations could also provide a means of tightening control over online speech.

Above all, automation should not be seen as a universal solution, and the collective good of all humans, not just AI developers, should always be a consideration. Malle favours a focus on systems that complement rather than replace human expertise in areas such as education, healthcare and social services. For example, AI could help overextended teachers to get a better handle on students who need more individual attention or are struggling in particular areas of the curriculum. Or AI could take care of routine tasks in the hospital ward, so that nurses can better focus on the specific needs of their patients.

The goal should be to amplify what can be achieved with available human intellect, expertise and judgement – not to take those out of the equation altogether. “I really see opportunities in the domains where we really don’t have enough humans or not enough trained humans,” Malle says. “Let’s think about domains of need first.”
__
To learn more about how AI could help solve grand challenges, while not doing harm in the process, visit the FII Institute website.


