Field Trips Anywhere
CHO(HAN)Haejoang

artificial intelligence, ethics and society 20200208

조한 2020.02.09 12:56 Views: 1267

Closing keynote by Gina Neff

Fri 8th, 4:30 pm – 5:30 pm (2020.02.08)
Title: From Bad Users and Failed Uses to Responsible Technologies: A Call to Expand the AI Ethics Toolkit
Gina Neff (Oxford Internet Institute, University of Oxford)
https://www.aies-conference.com/2020/invited-talks/#talk3
 
 
Artificial Unintelligence by Meredith Broussard

Algorithms of Oppression by Safiya Noble

Automating Inequality by Virginia Eubanks

Chair: TBD
Abstract:

 

Recent advances in artificial intelligence applications have sparked scholarly and public attention to the challenges of the ethical design of technologies. These conversations about ethics have been targeted largely at technology designers and concerned with helping to inform building better and fairer AI tools and technologies. This approach, however, addresses only a small part of the problem of responsible use and will not be adequate for describing or redressing the problems that will arise as more types of AI technologies are more widely used.
Many of the tools being developed today have potentially enormous and historic impacts on how people work, how society organises, stores and distributes information, where and how people interact with one another, and how people’s work is valued and compensated. And yet, our ethical attention has focused on a fairly narrow range of questions about expanding the access to, fairness of, and accountability for existing tools. Instead, I argue that scholars should develop much broader questions about the reconfiguration of societal power, for which AI technologies form a crucial component.
This talk will argue that AI ethics needs to expand its theoretical and methodological toolkit in order to move away from prioritizing notions of good design that privilege the work of good and ethical technology designers. Instead, using approaches from feminist theory, organization studies, and science and technology studies, I argue for expanding how we evaluate uses of AI. This approach begins with the assumption of socially informed technological affordances, or “imagined affordances,” shaping how people understand and use technologies in practice. It also gives centrality to the power of social institutions for shaping technologies-in-practice.

Short Bio:

Professor Gina Neff is a Senior Research Fellow at the Oxford Internet Institute and the Department of Sociology at the University of Oxford. Science called her book Self-Tracking, co-authored with Dawn Nafus (MIT Press, 2016), “excellent,” and a reviewer in the New York Review of Books said it was “easily the best book I’ve come across on the subject—‘about the tremendous power given to already powerful corporations when people allow companies to peer into their lives through data.’” Her book about the rise of internet industries in New York City, Venture Labor: Work and the Burden of Risk in Innovative Industries (MIT Press, 2012), won the 2013 American Sociological Association’s Communication and Information Technologies Best Book Award. Her next book, Building Information: How Teams, Companies and Industries Make New Technologies Work, is co-authored with Carrie Sturts Dossick, with whom she directed the Collaboration, Technology and Organizations Practices Lab at the University of Washington. A leader in the new area of “human-centred data science,” Professor Neff leads a new project on the organizational challenges companies face using AI for decision making.
She holds a Ph.D. in sociology from Columbia University, where she is a faculty affiliate at the Center on Organizational Innovation. Professor Neff has held fellowships at the British Academy, the Institute for Advanced Study, and Princeton University’s Center for Information Technology Policy. Her writing for the general public appears in Wired, Slate and The Atlantic, among other outlets. As a member of the University of Oxford’s Innovation Forum, she advises the university’s entrepreneurship policies. She is the responsible technology advisor to GMG Ventures, a venture capital firm investing in digital news, media and entertainment companies. She is a strategic advisor on AI to the Women’s Forum for the Economy & Society and leads the Minderoo Foundation’s working group on responsible AI. She serves on the steering committee for the Reuters Institute for the Study of Journalism, the advisory board of Data & Society and the academic council for AI Now, and is on the Royal Society’s high-level expert commission on online information.
