Field Trips Anywhere
CHO(HAN)Haejoang

An Open Letter Calling for a Pause on AI Experiments

Cho Han · 2023.03.31 09:04

A declaration led by Max Tegmark, author of Life 3.0.
It calls for slowing down AI experiments: pause for even six months, catch our breath, and take stock of the situation.
Many say this is impossible because money bigger than oil and coal has attached itself to AI and the race is running out of control, but we have to at least stir.

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, "Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources." Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.[5] We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.
