Field Trips Anywhere
CHO(HAN)Haejoang

Artificial Intelligence, Ethics and Society 20200208

Cho Han · 2020.02.09 12:56 · Views: 243

Closing keynote by Gina Neff

Fri, Feb 8th, 4:30 pm – 5:30 pm
Title: From Bad Users and Failed Uses to Responsible Technologies: A Call to Expand the AI Ethics Toolkit
Gina Neff (Oxford Internet Institute, University of Oxford)
https://www.aies-conference.com/2020/invited-talks/#talk3
 
 
Artificial Unintelligence (Meredith Broussard)

Algorithms of Oppression (Safiya Umoja Noble)

Automating Inequality (Virginia Eubanks)
Chair: TBD
Abstract:

 

Recent advances in artificial intelligence applications have sparked scholarly and public attention to the challenges of the ethical design of technologies. These conversations about ethics have been targeted largely at technology designers and concerned with informing the design of better and fairer AI tools and technologies. This approach, however, addresses only a small part of the problem of responsible use, and it will not be adequate for describing or redressing the problems that will arise as more types of AI technologies come into wider use.
Many of the tools being developed today have potentially enormous and historic impacts on how people work, how society organises, stores and distributes information, where and how people interact with one another, and how people’s work is valued and compensated. And yet, our ethical attention has looked at a fairly narrow range of questions about expanding the access to, fairness of, and accountability for existing tools. Instead, I argue that scholars should pursue much broader questions about the reconfiguration of societal power, for which AI technologies form a crucial component.
This talk will argue that AI ethics needs to expand its theoretical and methodological toolkit in order to move away from prioritizing notions of good design that privilege the work of good and ethical technology designers. Instead, using approaches from feminist theory, organization studies, and science and technology studies, I argue for expanding how we evaluate uses of AI. This approach begins with the assumption of socially informed technological affordances, or “imagined affordances,” shaping how people understand and use technologies in practice. It also gives centrality to the power of social institutions in shaping technologies-in-practice.

Short Bio:

Professor Gina Neff is a Senior Research Fellow at the Oxford Internet Institute and the Department of Sociology at the University of Oxford. Science called her book, Self-Tracking, co-authored with Dawn Nafus (MIT Press, 2016), “excellent” and a reviewer in the New York Review of Books said it was “easily the best book I’ve come across on the subject—‘about the tremendous power given to already powerful corporations when people allow companies to peer into their lives through data.’” Her book about the rise of internet industries in New York City, Venture Labor: Work and the Burden of Risk in Innovative Industries (MIT Press, 2012), won the 2013 American Sociological Association’s Communication and Information Technologies Best Book Award. Her next book, Building Information: How teams, companies and industries make new technologies work is co-authored with Carrie Sturts Dossick, with whom she directed the Collaboration, Technology and Organizations Practices Lab at the University of Washington. A leader in the new area of “human-centred data science,” Professor Neff leads a new project on the organizational challenges companies face using AI for decision making.
She holds a Ph.D. in sociology from Columbia University, where she is a faculty affiliate at the Center on Organizational Innovation. Professor Neff has held fellowships at the British Academy, the Institute for Advanced Study and Princeton University’s Center for Information Technology Policy. Her writing for the general public appears in Wired, Slate and The Atlantic, among other outlets. As a member of the University of Oxford’s Innovation Forum, she advises the university’s entrepreneurship policies. She is the responsible technology advisor to GMG Ventures, a venture capital firm investing in digital news, media and entertainment companies. She is a strategic advisor on AI to the Women’s Forum for the Economy & Society and leads the Minderoo Foundation’s working group on responsible AI. She serves on the steering committee of the Reuters Institute for the Study of Journalism, the advisory board of Data & Society and the academic council for AI Now, and is on the Royal Society’s high-level expert commission on online information.
