Field Trips Anywhere
CHO(HAN)Haejoang

Artificial Intelligence, Ethics and Society (AIES), 2020-02-08

조한 | 2020.02.09 12:56 | Views: 1265

Closing keynote by Gina Neff

Fri, Feb 8th 2020, 4:30 pm – 5:30 pm
Title: From Bad Users and Failed Uses to Responsible Technologies: A Call to Expand the AI Ethics Toolkit
Speaker: Gina Neff (Oxford Internet Institute, University of Oxford)
https://www.aies-conference.com/2020/invited-talks/#talk3
Artificial Unintelligence by Meredith Broussard

Algorithms of Oppression by Safiya Umoja Noble

Automating Inequality by Virginia Eubanks

Chair: TBD
Abstract:

 

Recent advances in artificial intelligence applications have sparked scholarly and public attention to the challenges of the ethical design of technologies. These conversations about ethics have been targeted largely at technology designers and concerned with helping to inform building better and fairer AI tools and technologies. This approach, however, addresses only a small part of the problem of responsible use and will not be adequate for describing or redressing the problems that will arise as more types of AI technologies are more widely used.
Many of the tools being developed today have potentially enormous and historic impacts on how people work, how society organises, stores and distributes information, where and how people interact with one another, and how people’s work is valued and compensated. And yet, our ethical attention has looked at a fairly narrow range of questions about expanding the access to, fairness of, and accountability for existing tools. Instead, I argue that scholars should develop much broader questions about the reconfiguration of societal power, for which AI technologies form a crucial component.
This talk will argue that AI ethics needs to expand its theoretical and methodological toolkit in order to move away from prioritizing notions of good design that privilege the work of good and ethical technology designers. Instead, using approaches from feminist theory, organization studies, and science and technology studies, I argue for expanding how we evaluate uses of AI. This approach begins with the assumption of socially informed technological affordances, or “imagined affordances,” shaping how people understand and use technologies in practice. It also gives centrality to the power of social institutions for shaping technologies-in-practice.

Short Bio:

Professor Gina Neff is a Senior Research Fellow at the Oxford Internet Institute and the Department of Sociology at the University of Oxford. Science called her book, Self-Tracking, co-authored with Dawn Nafus (MIT Press, 2016), “excellent” and a reviewer in the New York Review of Books said it was “easily the best book I’ve come across on the subject—‘about the tremendous power given to already powerful corporations when people allow companies to peer into their lives through data.’” Her book about the rise of internet industries in New York City, Venture Labor: Work and the Burden of Risk in Innovative Industries (MIT Press, 2012), won the 2013 American Sociological Association’s Communication and Information Technologies Best Book Award. Her next book, Building Information: How Teams, Companies and Industries Make New Technologies Work, is co-authored with Carrie Sturts Dossick, with whom she directed the Collaboration, Technology and Organizations Practices Lab at the University of Washington. A leader in the new area of “human-centred data science,” Professor Neff leads a new project on the organizational challenges companies face using AI for decision making.
She holds a Ph.D. in sociology from Columbia University, where she is a faculty affiliate at the Center on Organizational Innovation. Professor Neff has held fellowships at the British Academy, the Institute for Advanced Study and Princeton University’s Center for Information Technology Policy. Her writing for the general public appears in Wired, Slate and The Atlantic, among other outlets. As a member of the University of Oxford’s Innovation Forum, she advises the university’s entrepreneurship policies. She is the responsible technology advisor to GMG Ventures, a venture capital firm investing in digital news, media and entertainment companies. She is a strategic advisor on AI to the Women’s Forum for the Economy & Society and leads the Minderoo Foundation’s working group on responsible AI. She serves on the steering committee for the Reuters Institute for the Study of Journalism, the advisory board of Data & Society and the academic council for AI Now, and is on the Royal Society’s high-level expert commission on online information.
