Human Combo Generated Leaderboard: 7 day

  • Ranking: The forecaster's position in the leaderboard, ordered by Overall Score (lower Brier scores are better)
  • Organization: The group responsible for the model or forecasts
  • Model: The LLM and its prompt variant, or the human group and its forecast aggregation method
    • zero shot: used a zero-shot prompt
    • scratchpad: used a scratchpad prompt with instructions that outline a procedure the model should use to reason about the question
    • superforecaster with news 1/2/3: used one of three superforecaster-style prompt variants, each supplemented with relevant news summaries
    • with freeze values: for questions from market sources, the prompt was supplemented with the aggregate human forecast from the relevant platform on the day the question set was generated
    • with news: the prompt was supplemented with relevant news summaries obtained through an automated process
  • Dataset Score: The average Brier score across all questions sourced from datasets
  • Market Score (resolved): The average Brier score across all resolved questions sourced from prediction markets and forecast aggregation platforms
  • Market Score (unresolved): The average Brier score across all unresolved questions sourced from prediction markets and forecast aggregation platforms
  • Market Score (overall): The average Brier score across all questions, resolved and unresolved, sourced from prediction markets and forecast aggregation platforms
  • Overall Resolved Score: The average of the Dataset Score and the Market Score (resolved) columns
  • Overall Score: The average of the Dataset Score and the Market Score (overall) columns (see the scoring sketch after this list)
  • Overall Score 95% CI: The 95% confidence interval for the Overall Score
  • Pairwise p-value comparing to No. 1 (bootstrapped): The p-value obtained by bootstrapping the difference in Overall Score between each forecaster and the top-ranked forecaster (rank 1) under the null hypothesis that there is no difference (see the bootstrap sketch after this list)
  • Pct. more accurate than No. 1: The percent of questions on which this forecaster's score was better (lower) than the top-ranked forecaster's
  • Pct. Imputed: The percent of questions for which this forecaster did not provide a forecast and therefore had a value imputed: 0.5 for dataset questions, and the aggregate human forecast on the forecast due date for questions sourced from prediction markets or forecast aggregation platforms (see the imputation sketch after this list)
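
Scoring sketch: every score column is a mean of per-question Brier scores, and the Overall Score averages the Dataset Score with the Market Score (overall). The Python below is illustrative only (the function and variable names are not ForecastBench's code), assuming simple unweighted means.

```python
import numpy as np

def brier(forecast: float, outcome: int) -> float:
    """Brier score for a binary question: (forecast - outcome)^2; lower is better."""
    return (forecast - outcome) ** 2

def overall_score(dataset_briers, market_briers) -> float:
    """Overall Score = mean of the Dataset Score and the Market Score (overall)."""
    dataset_score = float(np.mean(dataset_briers))  # Dataset Score column
    market_score = float(np.mean(market_briers))    # Market Score (overall) column
    return (dataset_score + market_score) / 2

# Sanity check against the table, using the already-averaged column values from
# the GPT-4o (scratchpad) row: (0.129 + 0.073) / 2 = 0.101, its Overall Score.
print(round(overall_score([0.129], [0.073]), 3))  # 0.101
```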
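
Bootstrap sketch for the last two statistical columns. The exact ForecastBench resampling scheme (stratification by question source, number of replicates, sidedness of the test) is not specified here, so this shows only the generic percentile-bootstrap idea, applied to a single mean score for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci(per_question_scores, n_boot=10_000, alpha=0.05):
    """Percentile-bootstrap confidence interval (95% by default) for a mean score."""
    scores = np.asarray(per_question_scores)
    idx = rng.integers(0, len(scores), size=(n_boot, len(scores)))
    boot_means = scores[idx].mean(axis=1)  # resample questions with replacement
    return np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])

def pairwise_p_value(scores_model, scores_no1, n_boot=10_000):
    """Bootstrap the per-question score differences vs. the No. 1 forecaster
    under the null hypothesis that there is no difference (mean zero)."""
    diffs = np.asarray(scores_model) - np.asarray(scores_no1)
    observed = diffs.mean()
    centered = diffs - observed  # impose the null: mean difference of zero
    idx = rng.integers(0, len(diffs), size=(n_boot, len(diffs)))
    boot_means = centered[idx].mean(axis=1)
    return float(np.mean(np.abs(boot_means) >= abs(observed)))  # two-sided p-value
```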
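
Imputation sketch for skipped questions, following the rule described above; the dictionary keys are hypothetical placeholders, not ForecastBench field names.

```python
def impute_forecast(question: dict, forecast: float | None) -> float:
    """Fill in a forecast when the forecaster skipped the question."""
    if forecast is not None:
        return forecast
    if question["source_type"] == "dataset":  # hypothetical key
        return 0.5  # dataset questions are imputed with 0.5
    # Market / aggregation-platform questions are imputed with the aggregate
    # human (crowd) forecast as of the forecast due date.
    return question["crowd_forecast_on_due_date"]  # hypothetical key
```
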
Ranking Organization Model Dataset Score (N=428) Market Score (resolved) (N=26) Market Score (unresolved) (N=275) Market Score (overall) (N=301) Overall Resolved Score (N=454) Overall Score (N=729) Overall Score 95% CI Pairwise p-value comparing to No. 1 (bootstrapped) Pct. more accurate than No. 1 Pct. Imputed
1 ForecastBench Superforecaster median forecast 0.099 0.064 0.037 0.040 0.081 0.069 [0.059, 0.08] N/A 0% 0%
2 OpenAI GPT-4 (zero shot with freeze values) 0.126 0.159 0.030 0.041 0.142 0.083 [0.073, 0.094] 0.001 34% 0%
3 ForecastBench Public median forecast 0.130 0.137 0.029 0.039 0.133 0.084 [0.074, 0.095] <0.001 27% 0%
4 OpenAI GPT-4-Turbo-2024-04-09 (zero shot with freeze values) 0.133 0.159 0.029 0.040 0.146 0.086 [0.076, 0.097] <0.001 33% 0%
5 Anthropic Claude-3-5-Sonnet-20240620 (zero shot with freeze values) 0.112 0.273 0.042 0.062 0.192 0.087 [0.074, 0.1] 0.002 34% 0%
6 OpenAI GPT-4o (scratchpad with freeze values) 0.129 0.158 0.042 0.052 0.144 0.091 [0.08, 0.101] <0.001 29% 0%
7 Anthropic Claude-3-5-Sonnet-20240620 (scratchpad with freeze values) 0.119 0.201 0.051 0.064 0.160 0.092 [0.079, 0.104] <0.001 28% 0%
8 Anthropic Claude-3-5-Sonnet-20240620 (scratchpad with news with freeze values) 0.129 0.210 0.046 0.060 0.169 0.095 [0.083, 0.106] <0.001 28% 0%
9 Mistral AI Mistral-Large-Latest (zero shot with freeze values) 0.127 0.171 0.051 0.062 0.149 0.095 [0.083, 0.106] <0.001 25% 0%
10 OpenAI GPT-4o (scratchpad with news with freeze values) 0.142 0.103 0.044 0.049 0.122 0.095 [0.084, 0.106] <0.001 28% 0%
11 Mistral AI Mistral-Large-Latest (scratchpad with freeze values) 0.131 0.151 0.060 0.068 0.141 0.099 [0.089, 0.109] <0.001 23% 0%
12 OpenAI GPT-4o (scratchpad) 0.129 0.192 0.062 0.073 0.160 0.101 [0.09, 0.112] <0.001 24% 0%
13 Anthropic Claude-3-Opus-20240229 (zero shot with freeze values) 0.145 0.173 0.047 0.057 0.159 0.101 [0.089, 0.114] <0.001 31% 0%
14 Meta Llama-3-70b-Chat-Hf (zero shot with freeze values) 0.136 0.156 0.059 0.067 0.146 0.101 [0.089, 0.113] <0.001 27% 0%
15 Anthropic Claude-2.1 (scratchpad with freeze values) 0.162 0.033 0.043 0.042 0.098 0.102 [0.092, 0.112] <0.001 29% 28%
16 Anthropic Claude-3-5-Sonnet-20240620 (scratchpad) 0.119 0.252 0.070 0.086 0.186 0.102 [0.089, 0.115] <0.001 25% 0%
17 Anthropic Claude-3-5-Sonnet-20240620 (superforecaster with news 3) 0.134 0.196 0.062 0.074 0.165 0.104 [0.092, 0.116] <0.001 25% 0%
18 OpenAI GPT-4 (scratchpad with freeze values) 0.147 0.161 0.051 0.061 0.154 0.104 [0.093, 0.114] <0.001 26% 0%
19 Qwen Qwen1.5-110B-Chat (zero shot with freeze values) 0.152 0.141 0.048 0.056 0.146 0.104 [0.093, 0.115] <0.001 27% 0%
20 Anthropic Claude-3-Opus-20240229 (scratchpad with freeze values) 0.132 0.158 0.068 0.076 0.145 0.104 [0.093, 0.115] <0.001 24% 0%
21 Meta Llama-3-70b-Chat-Hf (scratchpad with freeze values) 0.146 0.129 0.058 0.064 0.137 0.105 [0.095, 0.115] <0.001 22% 0%
22 Anthropic Claude-3-5-Sonnet-20240620 (scratchpad with news) 0.129 0.229 0.070 0.083 0.179 0.106 [0.093, 0.119] <0.001 24% 0%
23 Anthropic Claude-3-5-Sonnet-20240620 (zero shot) 0.112 0.295 0.082 0.101 0.203 0.106 [0.092, 0.12] <0.001 26% 0%
24 Anthropic Claude-2.1 (scratchpad) 0.162 0.060 0.051 0.052 0.111 0.107 [0.097, 0.117] <0.001 27% 28%
25 Mistral AI Mixtral-8x22B-Instruct-V0.1 (zero shot with freeze values) 0.150 0.160 0.057 0.066 0.155 0.108 [0.095, 0.121] <0.001 29% 0%
26 Google Gemini-1.5-Flash (scratchpad with freeze values) 0.150 0.157 0.061 0.069 0.154 0.110 [0.098, 0.122] <0.001 26% 0%
27 OpenAI GPT-4o (zero shot with freeze values) 0.164 0.160 0.045 0.055 0.162 0.110 [0.097, 0.123] <0.001 30% 0%
28 Google Gemini-1.5-Pro (scratchpad with freeze values) 0.136 0.199 0.073 0.084 0.167 0.110 [0.099, 0.121] <0.001 23% 0%
29 Anthropic Claude-3-Opus-20240229 (scratchpad) 0.132 0.186 0.079 0.088 0.159 0.110 [0.098, 0.122] <0.001 24% 0%
30 OpenAI GPT-4 (zero shot) 0.126 0.156 0.089 0.095 0.141 0.110 [0.099, 0.121] <0.001 24% 0%
31 OpenAI GPT-4o (scratchpad with news) 0.142 0.182 0.070 0.080 0.162 0.111 [0.098, 0.123] <0.001 24% 0%
32 OpenAI GPT-4-Turbo-2024-04-09 (zero shot) 0.133 0.236 0.075 0.089 0.184 0.111 [0.099, 0.123] <0.001 21% 0%
33 Google Gemini-1.5-Pro (zero shot with freeze values) 0.154 0.178 0.058 0.068 0.166 0.111 [0.097, 0.124] <0.001 28% 0%
34 Anthropic Claude-3-5-Sonnet-20240620 (superforecaster with news 1) 0.138 0.241 0.070 0.085 0.190 0.111 [0.099, 0.124] <0.001 24% 0%
35 Mistral AI Mixtral-8x22B-Instruct-V0.1 (scratchpad with freeze values) 0.146 0.162 0.069 0.077 0.154 0.112 [0.101, 0.123] <0.001 23% 0%
36 OpenAI GPT-4-Turbo-2024-04-09 (scratchpad with freeze values) 0.152 0.148 0.065 0.072 0.150 0.112 [0.1, 0.124] <0.001 26% 0%
37 Google Gemini-1.5-Flash (zero shot with freeze values) 0.147 0.183 0.068 0.078 0.165 0.113 [0.098, 0.127] <0.001 27% 0%
38 OpenAI GPT-4 (scratchpad) 0.147 0.153 0.072 0.079 0.150 0.113 [0.103, 0.123] <0.001 22% 0%
39 Google Gemini-1.5-Pro (scratchpad) 0.136 0.209 0.079 0.090 0.173 0.113 [0.102, 0.125] <0.001 23% 0%
40 Mistral AI Mistral-Large-Latest (scratchpad) 0.131 0.224 0.085 0.097 0.177 0.114 [0.103, 0.125] <0.001 23% 0%
41 Google Gemini-1.5-Pro (scratchpad with news with freeze values) 0.144 0.181 0.075 0.084 0.163 0.114 [0.102, 0.126] <0.001 21% 0%
42 ForecastBench Imputed Forecaster 0.203 0.058 0.026 0.029 0.130 0.116 [0.105, 0.127] <0.001 32% 100%
43 ForecastBench LLM Crowd (gpt-4o, claude-3.5-sonnet, gemini-1.5-pro) with news 0.155 0.203 0.065 0.077 0.179 0.116 [0.104, 0.128] <0.001 24% 0%
44 Anthropic Claude-2.1 (zero shot with freeze values) 0.177 0.159 0.046 0.055 0.168 0.116 [0.105, 0.128] <0.001 30% 0%
45 ForecastBench LLM Crowd (gpt-4o, claude-3.5-sonnet, gemini-1.5-pro) with news 0.158 0.195 0.065 0.077 0.176 0.117 [0.105, 0.129] <0.001 25% 0%
46 OpenAI GPT-4-Turbo-2024-04-09 (scratchpad with news with freeze values) 0.159 0.147 0.069 0.075 0.153 0.117 [0.104, 0.13] <0.001 27% 1%
47 OpenAI GPT-4-Turbo-2024-04-09 (scratchpad) 0.152 0.202 0.071 0.082 0.177 0.117 [0.106, 0.129] <0.001 21% 0%
48 ForecastBench LLM Crowd (gpt-4o, claude-3.5-sonnet, gemini-1.5-pro) with news 0.157 0.202 0.067 0.079 0.179 0.118 [0.106, 0.13] <0.001 24% 0%
49 Google Gemini-1.5-Pro (scratchpad with news) 0.144 0.175 0.083 0.091 0.160 0.118 [0.106, 0.13] <0.001 23% 0%
50 Meta Llama-3-70b-Chat-Hf (zero shot) 0.136 0.199 0.092 0.102 0.167 0.119 [0.107, 0.13] <0.001 23% 0%
51 Anthropic Claude-3-Opus-20240229 (zero shot) 0.145 0.220 0.081 0.093 0.183 0.119 [0.106, 0.132] <0.001 25% 0%
52 Anthropic Claude-3-Opus-20240229 (superforecaster with news 1) 0.132 0.244 0.093 0.106 0.188 0.119 [0.107, 0.131] <0.001 24% 0%
53 Mistral AI Mistral-Large-Latest (zero shot) 0.127 0.187 0.104 0.112 0.157 0.119 [0.107, 0.132] <0.001 21% 0%
54 Meta Llama-3-8b-Chat-Hf (zero shot with freeze values) 0.163 0.209 0.064 0.077 0.186 0.120 [0.106, 0.133] <0.001 28% 0%
55 Qwen Qwen1.5-110B-Chat (scratchpad with news with freeze values) 0.154 0.172 0.078 0.086 0.163 0.120 [0.108, 0.132] <0.001 22% 0%
56 Google Gemini-1.5-Flash (scratchpad) 0.150 0.193 0.080 0.090 0.172 0.120 [0.108, 0.132] <0.001 23% 0%
57 Mistral AI Mixtral-8x22B-Instruct-V0.1 (scratchpad) 0.146 0.204 0.086 0.096 0.175 0.121 [0.11, 0.133] <0.001 22% 0%
58 Qwen Qwen1.5-110B-Chat (scratchpad with freeze values) 0.159 0.184 0.074 0.084 0.172 0.122 [0.11, 0.133] <0.001 22% 0%
59 Qwen Qwen1.5-110B-Chat (zero shot) 0.152 0.173 0.085 0.092 0.162 0.122 [0.11, 0.134] <0.001 20% 3%
60 Anthropic Claude-3-5-Sonnet-20240620 (superforecaster with news 2) 0.154 0.208 0.079 0.091 0.181 0.123 [0.11, 0.135] <0.001 24% 0%
61 Google Gemini-1.5-Flash (scratchpad with news with freeze values) 0.159 0.187 0.077 0.087 0.173 0.123 [0.11, 0.135] <0.001 23% 0%
62 Mistral AI Mixtral-8x7B-Instruct-V0.1 (scratchpad) 0.163 0.096 0.082 0.084 0.129 0.123 [0.112, 0.135] <0.001 26% 23%
63 OpenAI GPT-4o (superforecaster with news 3) 0.168 0.160 0.073 0.080 0.164 0.124 [0.112, 0.136] <0.001 23% 0%
64 Anthropic Claude-2.1 (scratchpad with news with freeze values) 0.169 0.121 0.076 0.080 0.145 0.124 [0.113, 0.136] <0.001 24% 4%
65 Google Gemini-1.5-Pro (superforecaster with news 3) 0.158 0.175 0.083 0.091 0.166 0.124 [0.112, 0.137] <0.001 24% 0%
66 OpenAI GPT-4-Turbo-2024-04-09 (superforecaster with news 3) 0.164 0.155 0.078 0.085 0.160 0.125 [0.113, 0.136] <0.001 24% 0%
67 OpenAI GPT-4o (zero shot) 0.164 0.229 0.071 0.085 0.197 0.125 [0.112, 0.138] <0.001 22% 0%
68 OpenAI GPT-4-Turbo-2024-04-09 (scratchpad with news) 0.159 0.233 0.078 0.091 0.196 0.125 [0.112, 0.138] <0.001 25% 0%
69 Anthropic Claude-2.1 (scratchpad with news) 0.169 0.186 0.071 0.081 0.177 0.125 [0.113, 0.137] <0.001 25% 13%
70 Qwen Qwen1.5-110B-Chat (superforecaster with news 1) 0.152 0.285 0.081 0.098 0.219 0.125 [0.112, 0.138] <0.001 22% 0%
71 Mistral AI Mixtral-8x22B-Instruct-V0.1 (scratchpad with news with freeze values) 0.161 0.161 0.083 0.090 0.161 0.125 [0.113, 0.137] <0.001 22% 0%
72 Mistral AI Mixtral-8x7B-Instruct-V0.1 (zero shot with freeze values) 0.167 0.172 0.076 0.084 0.169 0.126 [0.109, 0.142] <0.001 31% 0%
73 Qwen Qwen1.5-110B-Chat (scratchpad with news) 0.154 0.173 0.091 0.098 0.163 0.126 [0.114, 0.138] <0.001 21% 0%
74 Meta Llama-3-70b-Chat-Hf (scratchpad) 0.146 0.209 0.098 0.107 0.178 0.127 [0.115, 0.138] <0.001 21% 0%
75 Meta Llama-3-8b-Chat-Hf (scratchpad with freeze values) 0.178 0.147 0.069 0.076 0.163 0.127 [0.116, 0.138] <0.001 25% 0%
76 Google Gemini-1.5-Pro (superforecaster with news 1) 0.156 0.251 0.085 0.100 0.203 0.128 [0.115, 0.14] <0.001 24% 0%
77 Anthropic Claude-3-Opus-20240229 (superforecaster with news 3) 0.150 0.135 0.102 0.105 0.143 0.128 [0.115, 0.14] <0.001 20% 0%
78 Qwen Qwen1.5-110B-Chat (scratchpad) 0.159 0.210 0.085 0.096 0.184 0.128 [0.116, 0.14] <0.001 22% 0%
79 Anthropic Claude-3-Opus-20240229 (superforecaster with news 2) 0.146 0.239 0.098 0.111 0.192 0.128 [0.115, 0.142] <0.001 23% 0%
80 Mistral AI Mixtral-8x22B-Instruct-V0.1 (scratchpad with news) 0.161 0.198 0.087 0.097 0.179 0.129 [0.116, 0.141] <0.001 22% 0%
81 Meta Llama-3-8b-Chat-Hf (zero shot) 0.163 0.268 0.081 0.097 0.215 0.130 [0.116, 0.144] <0.001 26% 0%
82 Google Gemini-1.5-Pro (zero shot) 0.154 0.280 0.090 0.106 0.217 0.130 [0.115, 0.145] <0.001 22% 0%
83 Google Gemini-1.5-Flash (scratchpad with news) 0.159 0.221 0.091 0.102 0.190 0.131 [0.118, 0.144] <0.001 23% 0%
84 Google Gemini-1.5-Flash (zero shot) 0.147 0.222 0.104 0.114 0.185 0.131 [0.117, 0.145] <0.001 22% 0%
85 Mistral AI Mistral-Large-Latest (scratchpad with news with freeze values) 0.163 0.178 0.091 0.099 0.171 0.131 [0.118, 0.143] <0.001 23% 0%
86 Mistral AI Mistral-Large-Latest (superforecaster with news 2) 0.142 0.206 0.114 0.122 0.174 0.132 [0.119, 0.146] <0.001 22% 0%
87 Anthropic Claude-3-Opus-20240229 (scratchpad with news) 0.168 0.212 0.087 0.098 0.190 0.133 [0.119, 0.146] <0.001 22% 0%
88 OpenAI GPT-4-Turbo-2024-04-09 (superforecaster with news 1) 0.164 0.253 0.088 0.102 0.209 0.133 [0.12, 0.147] <0.001 22% 0%
89 OpenAI GPT-4o (superforecaster with news 1) 0.167 0.267 0.085 0.101 0.217 0.134 [0.119, 0.148] <0.001 24% 0%
90 Mistral AI Mixtral-8x22B-Instruct-V0.1 (superforecaster with news 3) 0.176 0.146 0.087 0.092 0.161 0.134 [0.122, 0.146] <0.001 21% 0%
91 Anthropic Claude-3-Opus-20240229 (scratchpad with news with freeze values) 0.168 0.212 0.090 0.100 0.190 0.134 [0.121, 0.148] <0.001 22% 0%
92 Mistral AI Mixtral-8x22B-Instruct-V0.1 (zero shot) 0.150 0.264 0.106 0.119 0.207 0.135 [0.12, 0.149] <0.001 24% 0%
93 Mistral AI Mistral-Large-Latest (scratchpad with news) 0.163 0.169 0.102 0.108 0.166 0.135 [0.123, 0.148] <0.001 23% 0%
94 Mistral AI Mistral-Large-Latest (superforecaster with news 1) 0.160 0.252 0.099 0.112 0.206 0.136 [0.123, 0.15] <0.001 23% 0%
95 OpenAI GPT-4o (superforecaster with news 2) 0.181 0.176 0.083 0.091 0.179 0.136 [0.122, 0.151] <0.001 24% 0%
96 Qwen Qwen1.5-110B-Chat (superforecaster with news 3) 0.176 0.172 0.091 0.098 0.174 0.137 [0.125, 0.149] <0.001 22% 0%
97 Mistral AI Mixtral-8x22B-Instruct-V0.1 (superforecaster with news 1) 0.166 0.246 0.096 0.109 0.206 0.138 [0.124, 0.152] <0.001 21% 0%
98 Anthropic Claude-2.1 (superforecaster with news 3) 0.166 0.159 0.107 0.111 0.163 0.139 [0.126, 0.152] <0.001 23% 2%
99 Anthropic Claude-2.1 (zero shot) 0.177 0.175 0.096 0.103 0.176 0.140 [0.128, 0.153] <0.001 21% 0%
100 Google Gemini-1.5-Flash (superforecaster with news 2) 0.166 0.250 0.105 0.117 0.208 0.142 [0.128, 0.156] <0.001 23% 0%
101 Mistral AI Mixtral-8x7B-Instruct-V0.1 (zero shot) 0.167 0.210 0.108 0.117 0.189 0.142 [0.125, 0.158] <0.001 24% 0%
102 Mistral AI Mixtral-8x7B-Instruct-V0.1 (superforecaster with news 1) 0.193 0.165 0.087 0.093 0.179 0.143 [0.13, 0.157] <0.001 27% 23%
103 Meta Llama-3-8b-Chat-Hf (scratchpad) 0.178 0.187 0.102 0.109 0.182 0.144 [0.131, 0.156] <0.001 23% 0%
104 OpenAI GPT-4-Turbo-2024-04-09 (superforecaster with news 2) 0.174 0.173 0.110 0.115 0.173 0.144 [0.13, 0.159] <0.001 24% 0%
105 Mistral AI Mistral-Large-Latest (superforecaster with news 3) 0.192 0.187 0.089 0.097 0.189 0.145 [0.132, 0.157] <0.001 22% 0%
106 Anthropic Claude-2.1 (superforecaster with news 2) 0.176 0.201 0.108 0.116 0.188 0.146 [0.132, 0.16] <0.001 26% 18%
107 Qwen Qwen1.5-110B-Chat (superforecaster with news 2) 0.181 0.207 0.103 0.112 0.194 0.147 [0.134, 0.16] <0.001 22% 0%
108 Mistral AI Mixtral-8x7B-Instruct-V0.1 (scratchpad with freeze values) 0.163 0.181 0.126 0.131 0.172 0.147 [0.132, 0.162] <0.001 24% 16%
109 Mistral AI Mixtral-8x22B-Instruct-V0.1 (superforecaster with news 2) 0.188 0.185 0.099 0.106 0.186 0.147 [0.134, 0.16] <0.001 23% 0%
110 Meta Llama-2-70b-Chat-Hf (zero shot with freeze values) 0.188 0.125 0.105 0.106 0.156 0.147 [0.133, 0.161] <0.001 25% 0%
111 Google Gemini-1.5-Flash (superforecaster with news 1) 0.170 0.308 0.108 0.125 0.239 0.148 [0.132, 0.163] <0.001 24% 0%
112 Meta Llama-2-70b-Chat-Hf (scratchpad with freeze values) 0.181 0.179 0.109 0.115 0.180 0.148 [0.136, 0.16] <0.001 22% 0%
113 Mistral AI Mixtral-8x7B-Instruct-V0.1 (scratchpad with news with freeze values) 0.215 0.155 0.074 0.081 0.185 0.148 [0.134, 0.162] <0.001 28% 20%
114 Google Gemini-1.5-Flash (superforecaster with news 3) 0.184 0.191 0.105 0.113 0.188 0.148 [0.135, 0.161] <0.001 21% 0%
115 Mistral AI Mixtral-8x7B-Instruct-V0.1 (superforecaster with news 2) 0.198 0.142 0.096 0.100 0.170 0.149 [0.134, 0.164] <0.001 28% 27%
116 Mistral AI Mixtral-8x7B-Instruct-V0.1 (superforecaster with news 3) 0.202 0.138 0.092 0.096 0.170 0.149 [0.136, 0.162] <0.001 25% 19%
117 Anthropic Claude-3-Haiku-20240307 (scratchpad with freeze values) 0.188 0.170 0.105 0.111 0.179 0.149 [0.137, 0.162] <0.001 22% 0%
118 Anthropic Claude-3-Haiku-20240307 (zero shot with freeze values) 0.231 0.136 0.069 0.075 0.184 0.153 [0.139, 0.166] <0.001 25% 0%
119 Anthropic Claude-3-Haiku-20240307 (scratchpad) 0.188 0.194 0.111 0.118 0.191 0.153 [0.14, 0.166] <0.001 22% 0%
120 Google Gemini-1.5-Pro (superforecaster with news 2) 0.183 0.264 0.112 0.125 0.224 0.154 [0.138, 0.17] <0.001 22% 0%
121 Anthropic Claude-3-Haiku-20240307 (superforecaster with news 2) 0.191 0.188 0.114 0.120 0.190 0.156 [0.142, 0.169] <0.001 22% 0%
122 Mistral AI Mixtral-8x7B-Instruct-V0.1 (scratchpad with news) 0.215 0.208 0.089 0.099 0.212 0.157 [0.142, 0.172] <0.001 27% 19%
123 OpenAI GPT-3.5-Turbo-0125 (scratchpad with freeze values) 0.195 0.250 0.107 0.119 0.222 0.157 [0.144, 0.17] <0.001 22% 0%
124 Meta Llama-2-70b-Chat-Hf (scratchpad) 0.181 0.221 0.125 0.133 0.201 0.157 [0.144, 0.17] <0.001 21% 0%
125 Anthropic Claude-3-Haiku-20240307 (scratchpad with news with freeze values) 0.208 0.171 0.103 0.109 0.190 0.158 [0.145, 0.171] <0.001 22% 0%
126 Anthropic Claude-3-Haiku-20240307 (scratchpad with news) 0.208 0.192 0.104 0.112 0.200 0.160 [0.147, 0.173] <0.001 22% 0%
127 OpenAI GPT-3.5-Turbo-0125 (scratchpad) 0.195 0.238 0.121 0.132 0.217 0.163 [0.15, 0.177] <0.001 21% 0%
128 Anthropic Claude-2.1 (superforecaster with news 1) 0.212 0.262 0.104 0.118 0.237 0.165 [0.151, 0.18] <0.001 23% 5%
129 ForecastBench Always 0.5 0.203 0.221 0.119 0.128 0.212 0.165 [0.153, 0.178] <0.001 18% 0%
130 Anthropic Claude-3-Haiku-20240307 (zero shot) 0.231 0.152 0.097 0.102 0.192 0.166 [0.152, 0.18] <0.001 23% 0%
131 Meta Llama-2-70b-Chat-Hf (zero shot) 0.188 0.205 0.153 0.157 0.196 0.173 [0.157, 0.188] <0.001 22% 1%
132 Anthropic Claude-3-Haiku-20240307 (superforecaster with news 3) 0.219 0.221 0.128 0.136 0.220 0.177 [0.163, 0.192] <0.001 21% 0%
133 Anthropic Claude-3-Haiku-20240307 (superforecaster with news 1) 0.218 0.240 0.138 0.147 0.229 0.182 [0.167, 0.198] <0.001 22% 0%
134 OpenAI GPT-3.5-Turbo-0125 (zero shot with freeze values) 0.300 0.159 0.061 0.069 0.229 0.184 [0.167, 0.202] <0.001 33% 0%
135 ForecastBench Random Uniform 0.261 0.256 0.157 0.165 0.258 0.213 [0.193, 0.233] <0.001 25% 0%
136 OpenAI GPT-3.5-Turbo-0125 (zero shot) 0.300 0.238 0.129 0.138 0.269 0.219 [0.2, 0.238] <0.001 23% 0%
137 ForecastBench Always 0 0.276 0.346 0.183 0.197 0.311 0.236 [0.208, 0.265] <0.001 27% 0%
138 ForecastBench Always 1 0.491 0.500 0.412 0.420 0.495 0.455 [0.421, 0.489] <0.001 25% 0%