Field Trips Anywhere
CHO(HAN)Haejoang

Artificial Intelligence, Ethics and Society 20200208

조한 2020.02.09 12:56 Views: 1264

Closing keynote by Gina Neff

Fri, Feb 8th, 4:30 pm – 5:30 pm (2020.02.08)
Title: From Bad Users and Failed Uses to Responsible Technologies: A Call to Expand the AI Ethics Toolkit
Gina Neff (Oxford Internet Institute, University of Oxford) https://www.aies-conference.com/2020/invited-talks/#talk3
Artificial Unintelligence by Meredith Broussard
Algorithms of Oppression by Safiya Umoja Noble
Automating Inequality by Virginia Eubanks
Chair: TBD

Abstract:
Recent advances in artificial intelligence applications have sparked scholarly and public attention to the challenges of the ethical design of technologies. These conversations about ethics have been targeted largely at technology designers and aimed at informing the building of better and fairer AI tools and technologies. This approach, however, addresses only a small part of the problem of responsible use, and it will not be adequate for describing or redressing the problems that will arise as more types of AI technologies come into wider use.
Many of the tools being developed today have potentially enormous and historic impacts on how people work, how society organises, stores and distributes information, where and how people interact with one another, and how people’s work is valued and compensated. And yet, our ethical attention has focused on a fairly narrow range of questions about expanding the access to, fairness of, and accountability for existing tools. Instead, I argue that scholars should develop much broader questions about the reconfiguration of societal power, for which AI technologies form a crucial component.
This talk will argue that AI ethics needs to expand its theoretical and methodological toolkit in order to move away from prioritizing notions of good design that privilege the work of good and ethical technology designers. Instead, using approaches from feminist theory, organization studies, and science and technology studies, I argue for expanding how we evaluate uses of AI. This approach begins with the assumption of socially informed technological affordances, or “imagined affordances,” shaping how people understand and use technologies in practice. It also gives centrality to the power of social institutions in shaping technologies-in-practice.

Short Bio:

Professor Gina Neff is a Senior Research Fellow at the Oxford Internet Institute and the Department of Sociology at the University of Oxford. Science called her book Self-Tracking, co-authored with Dawn Nafus (MIT Press, 2016), “excellent,” and a reviewer in the New York Review of Books said it was “easily the best book I’ve come across on the subject—‘about the tremendous power given to already powerful corporations when people allow companies to peer into their lives through data.’” Her book about the rise of internet industries in New York City, Venture Labor: Work and the Burden of Risk in Innovative Industries (MIT Press, 2012), won the 2013 American Sociological Association’s Communication and Information Technologies Best Book Award. Her next book, Building Information: How Teams, Companies and Industries Make New Technologies Work, is co-authored with Carrie Sturts Dossick, with whom she directed the Collaboration, Technology and Organizations Practices Lab at the University of Washington. A leader in the new area of “human-centred data science,” Professor Neff leads a new project on the organizational challenges companies face in using AI for decision making.
She holds a Ph.D. in sociology from Columbia University, where she is a faculty affiliate at the Center on Organizational Innovation. Professor Neff has held fellowships at the British Academy, the Institute for Advanced Study and Princeton University’s Center for Information Technology Policy. Her writing for the general public appears in Wired, Slate and The Atlantic, among other outlets. As a member of the University of Oxford’s Innovation Forum, she advises the university’s entrepreneurship policies. She is the responsible technology advisor to GMG Ventures, a venture capital firm investing in digital news, media and entertainment companies. She is a strategic advisor on AI to the Women’s Forum for the Economy & Society and leads the Minderoo Foundation’s working group on responsible AI. She serves on the steering committee of the Reuters Institute for the Study of Journalism, the advisory board of Data & Society and the academic council for AI Now, and is on the Royal Society’s high-level expert commission on online information.
