Field Trips Anywhere
CHO(HAN)Haejoang

Artificial Intelligence, Ethics and Society (AIES 2020), 2020-02-08

Cho Han · 2020.02.09 12:56 · Views: 1111

Closing keynote by Gina Neff

Fri, Feb 8, 4:30 pm – 5:30 pm
Title: From Bad Users and Failed Uses to Responsible Technologies: A Call to Expand the AI Ethics Toolkit
Gina Neff (Oxford Internet Institute, University of Oxford)
https://www.aies-conference.com/2020/invited-talks/#talk3

Related reading:
- Artificial Unintelligence by Meredith Broussard
- Algorithms of Oppression by Safiya Umoja Noble
- Automating Inequality by Virginia Eubanks

Chair: TBD
Abstract:
Recent advances in artificial intelligence applications have sparked scholarly and public attention to the challenges of the ethical design of technologies. These conversations about ethics have largely targeted technology designers and have been concerned with informing the design of better and fairer AI tools and technologies. This approach, however, addresses only a small part of the problem of responsible use, and it will not be adequate for describing or redressing the problems that will arise as more types of AI technologies come into wider use.

Many of the tools being developed today have potentially enormous and historic impacts on how people work, how society organises, stores and distributes information, where and how people interact with one another, and how people's work is valued and compensated. And yet our ethical attention has focused on a fairly narrow range of questions about expanding the access to, fairness of, and accountability for existing tools. Instead, I argue that scholars should pursue much broader questions about the reconfiguration of societal power, for which AI technologies form a crucial component.

This talk will argue that AI ethics needs to expand its theoretical and methodological toolkit in order to move away from prioritizing notions of good design that privilege the work of good and ethical technology designers. Instead, drawing on approaches from feminist theory, organization studies, and science and technology studies, I argue for expanding how we evaluate uses of AI. This approach begins with the assumption of socially informed technological affordances, or "imagined affordances," shaping how people understand and use technologies in practice. It also gives centrality to the power of social institutions in shaping technologies-in-practice.

Short Bio:

Professor Gina Neff is a Senior Research Fellow at the Oxford Internet Institute and the Department of Sociology at the University of Oxford. Science called her book Self-Tracking, co-authored with Dawn Nafus (MIT Press, 2016), "excellent," and a reviewer in the New York Review of Books said it was "easily the best book I've come across on the subject," a book "about the tremendous power given to already powerful corporations when people allow companies to peer into their lives through data." Her book about the rise of internet industries in New York City, Venture Labor: Work and the Burden of Risk in Innovative Industries (MIT Press, 2012), won the 2013 American Sociological Association's Communication and Information Technologies Best Book Award. Her next book, Building Information: How Teams, Companies and Industries Make New Technologies Work, is co-authored with Carrie Sturts Dossick, with whom she directed the Collaboration, Technology and Organizations Practices Lab at the University of Washington. A leader in the new area of "human-centred data science," Professor Neff leads a new project on the organizational challenges companies face using AI for decision making.
She holds a Ph.D. in sociology from Columbia University, where she is a faculty affiliate at the Center on Organizational Innovation. Professor Neff has held fellowships at the British Academy, the Institute for Advanced Study, and Princeton University's Center for Information Technology Policy. Her writing for the general public appears in Wired, Slate and The Atlantic, among other outlets. As a member of the University of Oxford's Innovation Forum, she advises the university's entrepreneurship policies. She is the responsible technology advisor to GMG Ventures, a venture capital firm investing in digital news, media and entertainment companies. She is a strategic advisor on AI to the Women's Forum for the Economy & Society and leads the Minderoo Foundation's working group on responsible AI. She serves on the steering committee of the Reuters Institute for the Study of Journalism, the advisory board of Data & Society, and the academic council for AI Now, and is on the Royal Society's high-level expert commission on online information.
