Artificial Intelligence, Ethics, and Society (AIES), 8 February 2020
Closing keynote by Gina Neff
Gina Neff (Oxford Internet Institute, University of Oxford)
https://www.aies-conference.com/2020/invited-talks/#talk3

Recent advances in artificial intelligence applications have sparked scholarly and public attention to the challenges of the ethical design of technologies. These conversations about ethics have been targeted largely at technology designers and concerned with helping to inform building better and fairer AI tools and technologies. This approach, however, addresses only a small part of the problem of responsible use and will not be adequate for describing or redressing the problems that will arise as more types of AI technologies are more widely used.
Many of the tools being developed today have potentially enormous and historic impacts on how people work, how society organises, stores and distributes information, where and how people interact with one another, and how people’s work is valued and compensated. And yet, our ethical attention has focused on a fairly narrow range of questions about expanding the access to, fairness of, and accountability for existing tools. Instead, I argue that scholars should develop much broader questions about the reconfiguration of societal power, for which AI technologies form a crucial component.
This talk will argue that AI ethics needs to expand its theoretical and methodological toolkit in order to move away from prioritizing notions of good design that privilege the work of good and ethical technology designers. Instead, using approaches from feminist theory, organization studies, and science and technology studies, I argue for expanding how we evaluate uses of AI. This approach begins with the assumption of socially informed technological affordances, or “imagined affordances,” shaping how people understand and use technologies in practice. It also gives centrality to the power of social institutions in shaping technologies-in-practice.
Professor Gina Neff is a Senior Research Fellow at the Oxford Internet Institute and the Department of Sociology at the University of Oxford. Science called her book, Self-Tracking, co-authored with Dawn Nafus (MIT Press, 2016), “excellent” and a reviewer in the New York Review of Books said it was “easily the best book I’ve come across on the subject—‘about the tremendous power given to already powerful corporations when people allow companies to peer into their lives through data.’” Her book about the rise of internet industries in New York City, Venture Labor: Work and the Burden of Risk in Innovative Industries (MIT Press, 2012), won the 2013 American Sociological Association’s Communication and Information Technologies Best Book Award. Her next book, Building Information: How teams, companies and industries make new technologies work is co-authored with Carrie Sturts Dossick, with whom she directed the Collaboration, Technology and Organizations Practices Lab at the University of Washington. A leader in the new area of “human-centred data science,” Professor Neff leads a new project on the organizational challenges companies face using AI for decision making.
She holds a Ph.D. in sociology from Columbia University, where she is a faculty affiliate at the Center on Organizational Innovation. Professor Neff has held fellowships at the British Academy, the Institute for Advanced Study and Princeton University’s Center for Information Technology Policy. Her writing for the general public appears in Wired, Slate and The Atlantic, among other outlets. As a member of the University of Oxford’s Innovation Forum, she advises the university’s entrepreneurship policies. She is the responsible technology advisor to GMG Ventures, a venture capital firm investing in digital news, media and entertainment companies. She is a strategic advisor on AI to the Women’s Forum for the Economy & Society and leads the Minderoo Foundation’s working group on responsible AI. She serves on the steering committee for the Reuters Institute for the Study of Journalism, the advisory board of Data & Society and the academic council for AI Now, and is on the Royal Society’s high-level expert commission on online information.
No. | Title | Date |
---|---|---|
312 | From legitimation crisis to fiscal crisis | 2023.02.24 |
311 | Can capitalism and democracy coexist? A review of Streeck | 2023.02.24 |
310 | Book introduction: The Violence of Financial Capitalism | 2023.02.24 |
309 | Rev. Park Hyung-kyu and the mask-dance Life of Jesus | 2023.02.11 |
308 | February impromptu movie night: <안녕, 소중한 사람> | 2023.02.11 |
307 | A refreshing column: <Tinged with Kindness> | 2023.02.10 |
306 | March lecture topic for Tamna Library | 2023.02.01 |
305 | An experimental humanities class for two ten-year-olds | 2023.02.01 |
304 | Organizing three weeks of drawings | 2023.02.01 |
303 | A diary that lasted only three days, replaced by M’s diary | 2023.02.01 |
302 | January 23 | 2023.01.24 |
301 | Saturday, January 21 and Sunday, January 22 | 2023.01.24 |
300 | The 16th to the 20th | 2023.01.24 |
299 | Sapiens Camp 2: a boy’s coming of age | 2023.01.20 |
298 | Han Kang’s <Farewell> | 2023.01.19 |
297 | On power and love, from Cho Min-ah’s book | 2023.01.18 |
296 | Ko Olina, the 13th to the 15th | 2023.01.16 |
295 | Ko Olina journal, day two | 2023.01.15 |
294 | The heyday of the writing room (Eodin Kim Hyun-a) | 2023.01.14 |
293 | Ttomun’s new year, diligent writing | 2023.01.14 |
292 | A reply from Cho Han | 2023.01.14 |
291 | Ttomun’s January letter | 2023.01.14 |
290 | A senior feminist as drawn by DALL-E | 2023.01.14 |
289 | Writing a travel journal after a long while | 2023.01.13 |
288 | Sapiens Camp 1: in search of the mind | 2022.12.22 |
287 | Busan village health center results-sharing event | 2022.12.12 |
286 | Living with the fierce fate called literature | 2022.12.04 |
285 | Rediscovering a critical writer: Orwell’s Roses | 2022.12.04 |
284 | A film on death with dignity: dying well | 2022.12.01 |
283 | Library association’s “Humanities on the Road” closing lecture | 2022.12.01 |
282 | Busan village health center | 2022.11.23 |
281 | November impromptu movie night | 2022.11.19 |
280 | The Sewol ferry story, eight years on | 2022.11.18 |
279 | Eom Gi-ho: mourning determines the size of a society | 2022.11.15 |
278 | Chuncheon cultural city keynote lecture | 2022.11.14 |
277 | A society that tries to banish mourning: the April 16 disaster humanities symposium (eight years ago) | 2022.11.14 |
276 | Cho Min-ah column: ghost dance | 2022.11.02 |
275 | Revising the “literacy in the AI era” slides | 2022.10.04 |
274 | September 17: Hangul study with Auntie Sunja | 2022.09.22 |
273 | Entrusting oneself to chance | 2022.09.22 |