Weak-to-strong generalization
| Status | closed |
|---|---|
| Event type | funding |
| Topic | ai safety |
| Organization | |
| Country | |
| Articles | 2 |
| Unique sources | 1 |
| Importance / Momentum | 0.94 / 0 |
| Period | 14.12.2023 00:00 — 14.12.2023 08:00 |
| Created | 06.04.2026 05:59:46 |
Articles in cluster: 2

| Title | Source | Published | Score | Summary |
|---|---|---|---|---|
| Weak-to-strong generalization | openai | 14.12.2023 00:00 | 1 | We present a new research direction for superalignment, together with promising initial results: can we leverage the generalization properties of deep learning to control strong models with weak supervisors? |
| Superalignment Fast Grants | openai | 14.12.2023 08:00 | 0.71 | We're launching $10M in grants to support technical research towards the alignment and safety of superhuman AI systems, including weak-to-strong generalization, interpretability, scalable oversight, and more. |