It is 2026. AI has embedded itself deeper than anyone predicted even five years ago. Ethical dilemmas have multiplied. Who answers for an algorithm's decisions? Can neural networks have "rights"? Where is the line between human and machine?
Key problems
- Bias in algorithms
- Privacy invasion
- Job displacement
- Autonomous weapons
- Deepfakes and disinformation
- Whether AGI will ever exist
Generative AI
- ChatGPT (OpenAI), Claude (Anthropic), Gemini (Google)
- Midjourney, DALL-E for images
- Sora for video generation
- Music generation (Suno, Udio)
- Ubiquitous by 2026
Deepfakes
- Indistinguishable from the real thing
- Political manipulation
- Non-consensual pornography: an epidemic
- Legal responses slow
- Detection tools race
Election disinformation
- 2024, a year of elections: US, India, EU
- AI-generated content widespread
- Trust in media crumbling
- Fact-checking overwhelmed
Job displacement
- Goldman Sachs: 300M jobs affected globally
- Call centers, content creators, lawyers, accountants
- New jobs created too
- Transition painful
The UBI debate
- Universal Basic Income as a response
- Sam Altman among its proponents
- Funding unclear
- Experiments limited so far
Artists' concerns
- Training data: artists' work used without consent
- Copyright lawsuits (Getty v. Stability AI)
- Hollywood strikes 2023
- Voice actors strike 2024
Autonomous vehicles
- Waymo in San Francisco
- Tesla Full Self-Driving
- Trolley problem implications
- Lives saved vs lives lost
AI in medicine
- Diagnostic AI outperforms humans in multiple domains
- Drug discovery accelerates
- Ethics: who takes responsibility for errors?
- Equity concerns
Surveillance
- Facial recognition public spaces
- Predictive policing biases
- Workplace monitoring
- Privacy rights in retreat
AI and governments
- US: executive orders, 2023
- EU AI Act, 2024: comprehensive
- China: state-directed
- Global coordination weak
EU AI Act
- Risk-based approach
- Unacceptable risk banned (social scoring)
- High risk regulated (CV screening, hiring)
- Transparency and disclosure requirements
- Fines of up to 7% of global revenue
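The tiered logic above can be sketched as a simple lookup. The mapping and obligation strings below are illustrative assumptions, not the Act's legal text:

```python
# Simplified sketch of the EU AI Act's risk-based approach.
# The tier assignments are illustrative, not a legal classification.
RISK_TIERS = {
    "social scoring": "unacceptable",  # banned outright
    "hiring": "high",                  # allowed, but heavily regulated
    "chatbot": "limited",              # transparency obligations
    "spam filter": "minimal",          # essentially unregulated
}

def obligations(use_case: str) -> str:
    """Return the (simplified) regulatory consequence for a use case."""
    tier = RISK_TIERS.get(use_case, "minimal")
    return {
        "unacceptable": "prohibited",
        "high": "conformity assessment, oversight, logging",
        "limited": "must disclose AI use to users",
        "minimal": "no specific obligations",
    }[tier]

print(obligations("social scoring"))  # prohibited
print(obligations("hiring"))          # conformity assessment, oversight, logging
```

The point of the design is that obligations attach to the use case, not to the model itself.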
OpenAI drama
- Sam Altman's ouster and reinstatement (November 2023)
- Board concerns over safety
- Ilya Sutskever's departure, 2024
- Q* rumors
- Superalignment team disbanded
Safety teams
- Anthropic: constitutional AI
- OpenAI: commitment to safety disputed
- Google DeepMind
- Academic and government efforts (AI Safety Institutes)
AGI debate
- Artificial General Intelligence
- Timeline estimates: 2030-2100
- Optimists: Altman, Kurzweil
- Skeptics: many researchers
- P(doom): a new rhetorical shorthand
X-risk
- Existential risk
- Nick Bostrom, "Superintelligence"
- Yudkowsky: "probably we die"
- Counterarguments exist
Alignment problem
- How to ensure AI goals = human goals
- Unsolved
- Hundreds of researchers working on it
- Constitutional AI, RLHF approaches
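RLHF, mentioned above, begins by training a reward model on pairs of responses ranked by humans. A minimal sketch of the Bradley-Terry loss such a model optimizes (the scores here are hypothetical):

```python
import math

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Bradley-Terry negative log-likelihood used for reward modeling:
    the probability that the human-preferred response outranks the
    rejected one is sigmoid(r_chosen - r_rejected)."""
    diff = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-diff)))

# A reward model that agrees with the human label incurs lower loss.
print(preference_loss(2.0, -1.0) < preference_loss(-1.0, 2.0))  # True
```

Minimizing this loss pushes the model's scores toward human preferences; the open question of alignment is whether those scores capture what humans actually value.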
AI rights?
- Blake Lemoine fired by Google in 2022 over claims that "LaMDA is sentient"
- Anthropic hires a "model welfare" researcher
- An unusual, previously nonexistent role
- Philosophers debate
Synthetic companions
- Replika, Character.AI
- AI friends/partners
- Psychological effects
- "Parasocial" relationships amplified
Loneliness and AI
- Filling a gap for the lonely
- But possibly worsening isolation
- Debate: helpful or harmful
- Japan: AI companions growing
AI in creative work
- Writer's block? Use Claude
- Artists use Midjourney
- Composers use Suno
- Where does creativity end and automation begin?
Ghost-writing AI
- Academic papers
- Corporate communications
- Fiction
- Students use it everywhere
- Educational implications
Education's response
- Some schools ban AI
- Others embrace it and teach students to use it
- Curriculum rethinking
- Skills shift
Military AI
- Autonomous drones
- Israel's "Gospel" system (AI targeting)
- Lethal Autonomous Weapons Systems (LAWS) debate
- UN treaty discussions
Datasets ethics
- Training data scraped without consent
- Bias in training
- Representation problems
- Data labeling labor conditions
Global South implications
- Data labeling often in Kenya, Philippines
- Low wages
- Traumatic content moderation
- Colonial patterns
Open source vs closed
- Meta's LLaMA: open-ish
- OpenAI, Anthropic: closed
- Safety risks of open weights
- Democratization of access
China AI
- DeepSeek, Alibaba, Baidu
- State priority
- Reportedly less focus on safety
- Global competition
AI in Russia
- GigaChat (Sber), YandexGPT
- Behind the US and China
- Sanctions taking a toll
- Some strong talent remains
Philosophical questions
- Consciousness in machines?
- Free will?
- Human exceptionalism?
- Age-old questions revisited
Pessimism vs optimism
- Doomers: AI kills us all
- Accelerationists: AI fixes everything
- Moderates: careful development
- The uncertainty is huge
Who AI helps
- Professionals: 2-3x productivity
- Disabled people: accessibility
- Researchers: faster analysis
- Small businesses: scaling
What citizens can do
- Understand the tools
- Maintain critical thinking
- Engage politically
- Demand regulation
- Preserve human skills
The future
- 2026-2030: transformation accelerates
- Regulation catches up slowly
- Society restructures
- The outcome is uncertain
- Human agency remains crucial