Field Trips Anywhere
CHO(HAN)Haejoang

artificial intelligence, ethics and society 20200208

조한 2020.02.09 12:56 · Views: 956

Closing keynote by Gina Neff

Fri 8th, 4:30 pm – 5:30 pm, 20200208
Title: From Bad Users and Failed Uses to Responsible Technologies: A Call to Expand the AI Ethics Toolkit
Gina Neff (Oxford Internet Institute, University of Oxford)
https://www.aies-conference.com/2020/invited-talks/#talk3
 
 
Artificial Unintelligence by Meredith Broussard

Algorithms of Oppression by Safiya Umoja Noble

Automating Inequality by Virginia Eubanks
 
 
 
 
Chair: TBD
Abstract:

 

Recent advances in artificial intelligence applications have sparked scholarly and public attention to the challenges of the ethical design of technologies. These conversations about ethics have been targeted largely at technology designers and concerned with helping to inform building better and fairer AI tools and technologies. This approach, however, addresses only a small part of the problem of responsible use and will not be adequate for describing or redressing the problems that will arise as more types of AI technologies are more widely used.
Many of the tools being developed today have potentially enormous and historic impacts on how people work, how society organises, stores and distributes information, where and how people interact with one another, and how people’s work is valued and compensated. And yet, our ethical attention has focused on a fairly narrow range of questions about expanding access to, fairness of, and accountability for existing tools. Instead, I argue that scholars should ask much broader questions about the reconfiguration of societal power, for which AI technologies form a crucial component.
This talk will argue that AI ethics needs to expand its theoretical and methodological toolkit in order to move away from prioritizing notions of good design that privilege the work of good and ethical technology designers. Instead, using approaches from feminist theory, organization studies, and science and technology studies, I argue for expanding how we evaluate uses of AI. This approach begins with the assumption of socially informed technological affordances, or “imagined affordances,” shaping how people understand and use technologies in practice. It also gives centrality to the power of social institutions in shaping technologies-in-practice.

Short Bio:

Professor Gina Neff is a Senior Research Fellow at the Oxford Internet Institute and the Department of Sociology at the University of Oxford. Science called her book, Self-Tracking, co-authored with Dawn Nafus (MIT Press, 2016), “excellent” and a reviewer in the New York Review of Books said it was “easily the best book I’ve come across on the subject—‘about the tremendous power given to already powerful corporations when people allow companies to peer into their lives through data.’” Her book about the rise of internet industries in New York City, Venture Labor: Work and the Burden of Risk in Innovative Industries (MIT Press, 2012), won the 2013 American Sociological Association’s Communication and Information Technologies Best Book Award. Her next book, Building Information: How teams, companies and industries make new technologies work is co-authored with Carrie Sturts Dossick, with whom she directed the Collaboration, Technology and Organizations Practices Lab at the University of Washington. A leader in the new area of “human-centred data science,” Professor Neff leads a new project on the organizational challenges companies face using AI for decision making.
She holds a Ph.D. in sociology from Columbia University, where she is a faculty affiliate at the Center on Organizational Innovation. Professor Neff has held fellowships at the British Academy, the Institute for Advanced Study and Princeton University’s Center for Information Technology Policy. Her writing for the general public appears in Wired, Slate and The Atlantic, among other outlets. As a member of the University of Oxford’s Innovation Forum, she advises the university’s entrepreneurship policies. She is the responsible technology advisor to GMG Ventures, a venture capital firm investing in digital news, media and entertainment companies. She is a strategic advisor on AI to the Women’s Forum for the Economy & Society and leads the Minderoo Foundation’s working group on responsible AI. She serves on the steering committee for the Reuters Institute for the Study of Journalism, the advisory board of Data & Society and the academic council for AI Now, and is on the Royal Society’s high-level expert commission on online information.
