Field Trips Anywhere

artificial intelligence, ethics and society 20200208

조한 2020.02.09 12:56 Views: 71

Closing keynote by Gina Neff

Fri 8th, 4:30 pm – 5:30 pm
Title: From Bad Users and Failed Uses to Responsible Technologies: A Call to Expand the AI Ethics Toolkit
Gina Neff (Oxford Internet Institute, University of Oxford)
Chair: TBD

Related books:
Artificial Unintelligence (Broussard)
Algorithms of Oppression
Automating Inequality


Recent advances in artificial intelligence applications have sparked scholarly and public attention to the challenges of the ethical design of technologies. These conversations about ethics have been targeted largely at technology designers and concerned with helping to inform building better and fairer AI tools and technologies. This approach, however, addresses only a small part of the problem of responsible use and will not be adequate for describing or redressing the problems that will arise as more types of AI technologies are more widely used.
Many of the tools being developed today have potentially enormous and historic impacts on how people work, how society organises, stores and distributes information, where and how people interact with one another, and how people’s work is valued and compensated. And yet, our ethical attention has looked at a fairly narrow range of questions about expanding the access to, fairness of, and accountability for existing tools. Instead, I argue that scholars should develop much broader questions about the reconfiguration of societal power, for which AI technologies form a crucial component.
This talk will argue that AI ethics needs to expand its theoretical and methodological toolkit in order to move away from prioritizing notions of good design that privilege the work of good and ethical technology designers. Instead, using approaches from feminist theory, organization studies, and science and technology studies, I argue for expanding how we evaluate uses of AI. This approach begins with the assumption of socially informed technological affordances, or “imagined affordances,” shaping how people understand and use technologies in practice. It also gives centrality to the power of social institutions for shaping technologies-in-practice.

Short Bio:

Professor Gina Neff is a Senior Research Fellow at the Oxford Internet Institute and the Department of Sociology at the University of Oxford. Science called her book, Self-Tracking, co-authored with Dawn Nafus (MIT Press, 2016), “excellent” and a reviewer in the New York Review of Books said it was “easily the best book I’ve come across on the subject—‘about the tremendous power given to already powerful corporations when people allow companies to peer into their lives through data.’” Her book about the rise of internet industries in New York City, Venture Labor: Work and the Burden of Risk in Innovative Industries (MIT Press, 2012), won the 2013 American Sociological Association’s Communication and Information Technologies Best Book Award. Her next book, Building Information: How teams, companies and industries make new technologies work is co-authored with Carrie Sturts Dossick, with whom she directed the Collaboration, Technology and Organizations Practices Lab at the University of Washington. A leader in the new area of “human-centred data science,” Professor Neff leads a new project on the organizational challenges companies face using AI for decision making.
She holds a Ph.D. in sociology from Columbia University, where she is a faculty affiliate at the Center on Organizational Innovation. Professor Neff has held fellowships at the British Academy, the Institute for Advanced Study and Princeton University’s Center for Information Technology Policy. Her writing for the general public appears in Wired, Slate and The Atlantic, among other outlets. As a member of the University of Oxford’s Innovation Forum, she advises the university’s entrepreneurship policies. She is the responsible technology advisor to GMG Ventures, a venture capital firm investing in digital news, media and entertainment companies. She is a strategic advisor on AI to the Women’s Forum for the Economy & Society and leads the Minderoo Foundation’s working group on responsible AI. She serves on the steering committee for the Reuters Institute for the Study of Journalism, the advisory board of Data & Society and the academic council for AI Now, and is on the Royal Society’s high-level expert commission on online information.
