Field Trips Anywhere
CHO(HAN)Haejoang

artificial intelligence, ethics and society 20200208

조한 · 2020.02.09 12:56 · Views: 243

Closing keynote by Gina Neff

Fri, Feb 8th 2020, 4:30 pm – 5:30 pm
Title: From Bad Users and Failed Uses to Responsible Technologies: A Call to Expand the AI Ethics Toolkit
Gina Neff (Oxford Internet Institute, University of Oxford) https://www.aies-conference.com/2020/invited-talks/#talk3
Artificial Unintelligence by Meredith Broussard

Algorithms of Oppression by Safiya Umoja Noble

Automating Inequality by Virginia Eubanks
Chair: TBD
Abstract:

 

Recent advances in artificial intelligence applications have sparked scholarly and public attention to the challenges of the ethical design of technologies. These conversations about ethics have been targeted largely at technology designers and concerned with helping to inform building better and fairer AI tools and technologies. This approach, however, addresses only a small part of the problem of responsible use and will not be adequate for describing or redressing the problems that will arise as more types of AI technologies are more widely used.
Many of the tools being developed today have potentially enormous and historic impacts on how people work, how society organises, stores and distributes information, where and how people interact with one another, and how people’s work is valued and compensated. And yet, our ethical attention has looked at a fairly narrow range of questions about expanding the access to, fairness of, and accountability for existing tools. Instead, I argue that scholars should develop much broader questions about the reconfiguration of societal power, for which AI technologies form a crucial component.
This talk will argue that AI ethics needs to expand its theoretical and methodological toolkit in order to move away from prioritizing notions of good design that privilege the work of good and ethical technology designers. Instead, using approaches from feminist theory, organization studies, and science and technology studies, I argue for expanding how we evaluate uses of AI. This approach begins with the assumption of socially informed technological affordances, or “imagined affordances,” shaping how people understand and use technologies in practice. It also gives centrality to the power of social institutions for shaping technologies-in-practice.

Short Bio:

Professor Gina Neff is a Senior Research Fellow at the Oxford Internet Institute and the Department of Sociology at the University of Oxford. Science called her book Self-Tracking, co-authored with Dawn Nafus (MIT Press, 2016), “excellent,” and a reviewer in the New York Review of Books said it was “easily the best book I’ve come across on the subject,” about “the tremendous power given to already powerful corporations when people allow companies to peer into their lives through data.” Her book about the rise of internet industries in New York City, Venture Labor: Work and the Burden of Risk in Innovative Industries (MIT Press, 2012), won the 2013 American Sociological Association’s Communication and Information Technologies Best Book Award. Her next book, Building Information: How Teams, Companies and Industries Make New Technologies Work, is co-authored with Carrie Sturts Dossick, with whom she directed the Collaboration, Technology and Organizations Practices Lab at the University of Washington. A leader in the new area of “human-centred data science,” Professor Neff leads a new project on the organizational challenges companies face using AI for decision making.
She holds a Ph.D. in sociology from Columbia University, where she is a faculty affiliate at the Center on Organizational Innovation. Professor Neff has held fellowships at the British Academy, the Institute for Advanced Study and Princeton University’s Center for Information Technology Policy. Her writing for the general public appears in Wired, Slate and The Atlantic, among other outlets. As a member of the University of Oxford’s Innovation Forum, she advises the university’s entrepreneurship policies. She is the responsible technology advisor to GMG Ventures, a venture capital firm investing in digital news, media and entertainment companies. She is a strategic advisor on AI to the Women’s Forum for the Economy & Society and leads the Minderoo Foundation’s working group on responsible AI. She serves on the steering committee for the Reuters Institute for the Study of Journalism, the advisory board of Data & Society and the academic council for AI Now, and is on the Royal Society’s high-level expert commission on online information.
