Human Combo Generated Leaderboard: overall

  • Ranking: The position of the forecaster in the leaderboard, ordered by Overall Score
  • Organization: The group responsible for the model or forecasts
  • Model: The LLM and prompt variant, or the human group and its forecast aggregation method
    • zero shot: used a zero-shot prompt
    • scratchpad: used a scratchpad prompt with instructions that outline a procedure the model should use to reason about the question
    • superforecaster with news 1/2/3: used one of three numbered "superforecaster" prompt variants, supplemented with news summaries
    • with freeze values: for questions from market sources, the prompt was supplemented with the aggregate human forecast from the relevant platform on the day the question set was generated
    • with news: the prompt was supplemented with relevant news summaries obtained through an automated process
  • Dataset Score: The average Brier score across all questions sourced from datasets
  • Market Score (resolved): The average Brier score across all resolved questions sourced from prediction markets and forecast aggregation platforms
  • Market Score (unresolved): The average Brier score across all unresolved questions sourced from prediction markets and forecast aggregation platforms
  • Market Score (overall): The average Brier score across all questions sourced from prediction markets and forecast aggregation platforms
  • Overall Resolved Score: The average of the Dataset Score and the Market Score (resolved) columns
  • Overall Score: The average of the Dataset Score and the Market Score (overall) columns (a scoring sketch follows this list)
  • Overall Score 95% CI: The 95% confidence interval for the Overall Score
  • Pairwise p-value comparing to No. 1 (bootstrapped): The p-value obtained by bootstrapping the difference in Overall Score between this forecaster and the top-ranked forecaster (rank 1), under the null hypothesis of no difference (a bootstrap sketch follows this list)
  • Pct. more accurate than No. 1: The percentage of questions on which this forecaster achieved a better (lower) Brier score than the top-ranked forecaster
  • Pct. Imputed: The percentage of questions for which this forecaster did not provide a forecast and therefore had one imputed: 0.5 for dataset questions, and the aggregate human forecast on the forecast due date for questions sourced from prediction markets or forecast aggregation platforms
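
To make the scoring columns concrete, the following minimal sketch (Python, using NumPy) computes a forecaster's Dataset Score, Market Score (overall), and Overall Score from per-question records. The field names (`source`, `prob`, `imputed_prob`, `outcome`) are hypothetical stand-ins for ForecastBench's actual data schema; the imputation rule and the final averaging follow the column descriptions above.

```python
import numpy as np

def brier(prob, outcome):
    """Brier score for a binary question: (p - y)^2; lower is better."""
    return (prob - outcome) ** 2

def leaderboard_scores(records):
    """Average Brier scores as defined in the column descriptions above.

    `records` is a list of dicts with hypothetical fields:
      source:       'dataset' or 'market'
      prob:         the forecaster's probability, or None if no forecast was given
      imputed_prob: 0.5 for dataset questions, else the aggregate human forecast
                    on the forecast due date (the imputation rule above)
      outcome:      the resolution value used for scoring
    """
    def score(r):
        p = r["prob"] if r["prob"] is not None else r["imputed_prob"]
        return brier(p, r["outcome"])

    dataset_score = np.mean([score(r) for r in records if r["source"] == "dataset"])
    market_score = np.mean([score(r) for r in records if r["source"] == "market"])
    return {
        "Dataset Score": dataset_score,
        "Market Score (overall)": market_score,
        # Overall Score: the average of the two columns above
        "Overall Score": (dataset_score + market_score) / 2,
    }
```

The resolved and unresolved market columns are the same average restricted to the corresponding subsets of market questions, and the Overall Resolved Score averages the Dataset Score with the Market Score (resolved).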
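The bootstrapped pairwise p-value can be illustrated with a standard paired bootstrap test, sketched below. This is an approximation rather than the benchmark's exact procedure: the function name is illustrative, and for simplicity it resamples a flat per-question mean, whereas the leaderboard's Overall Score averages dataset and market questions separately.

```python
import numpy as np

def bootstrap_pvalue(model_scores, best_scores, n_boot=10_000, seed=0):
    """One-sided bootstrap p-value for the mean per-question score difference.

    model_scores, best_scores: per-question Brier scores for this forecaster
    and the top-ranked forecaster, aligned question by question.
    """
    rng = np.random.default_rng(seed)
    diffs = np.asarray(model_scores) - np.asarray(best_scores)
    observed = diffs.mean()              # > 0 means worse than No. 1
    centered = diffs - observed          # impose the null: no mean difference
    n = len(diffs)
    resampled = np.array([
        centered[rng.integers(0, n, size=n)].mean() for _ in range(n_boot)
    ])
    # p-value: how often a null resample is at least as extreme as the observed gap
    return float(np.mean(resampled >= observed))
```

Centering the differences before resampling enforces the null hypothesis, so the p-value measures how surprising the observed gap would be if the two forecasters were equally accurate.
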
Ranking | Organization | Model | Dataset Score (N=870) | Market Score (resolved) (N=26) | Market Score (unresolved) (N=275) | Market Score (overall) (N=301) | Overall Resolved Score (N=896) | Overall Score (N=1,171) | Overall Score 95% CI | Pairwise p-value comparing to No. 1 (bootstrapped) | Pct. more accurate than No. 1 | Pct. Imputed
1 ForecastBench Superforecaster median forecast 0.097 0.064 0.037 0.040 0.080 0.068 [0.06, 0.077] 0% 0%
2 ForecastBench Public median forecast 0.127 0.137 0.029 0.039 0.132 0.083 [0.075, 0.091] <0.001 28% 0%
3 OpenAI GPT-4 (zero shot with freeze values) 0.131 0.159 0.030 0.041 0.145 0.086 [0.077, 0.095] <0.001 34% 0%
4 OpenAI GPT-4-Turbo-2024-04-09 (zero shot with freeze values) 0.135 0.159 0.029 0.040 0.147 0.088 [0.079, 0.097] <0.001 33% 0%
5 Anthropic Claude-3-5-Sonnet-20240620 (zero shot with freeze values) 0.114 0.273 0.042 0.062 0.193 0.088 [0.076, 0.1] <0.001 34% 0%
6 Anthropic Claude-3-5-Sonnet-20240620 (scratchpad with freeze values) 0.118 0.201 0.051 0.064 0.159 0.091 [0.08, 0.102] <0.001 28% 0%
7 OpenAI GPT-4o (scratchpad with freeze values) 0.134 0.158 0.042 0.052 0.146 0.093 [0.084, 0.102] <0.001 28% 0%
8 Anthropic Claude-3-5-Sonnet-20240620 (scratchpad with news with freeze values) 0.130 0.210 0.046 0.060 0.170 0.095 [0.085, 0.105] <0.001 28% 0%
9 OpenAI GPT-4o (scratchpad with news with freeze values) 0.142 0.103 0.044 0.049 0.123 0.096 [0.087, 0.104] <0.001 27% 0%
10 Mistral AI Mistral-Large-Latest (zero shot with freeze values) 0.134 0.171 0.051 0.062 0.153 0.098 [0.088, 0.108] <0.001 25% 0%
11 Mistral AI Mistral-Large-Latest (scratchpad with freeze values) 0.130 0.151 0.060 0.068 0.141 0.099 [0.091, 0.107] <0.001 24% 0%
12 Anthropic Claude-3-Opus-20240229 (zero shot with freeze values) 0.146 0.173 0.047 0.057 0.159 0.102 [0.091, 0.112] <0.001 30% 0%
13 Anthropic Claude-3-5-Sonnet-20240620 (scratchpad) 0.118 0.252 0.070 0.086 0.185 0.102 [0.09, 0.113] <0.001 26% 0%
14 Anthropic Claude-2.1 (scratchpad with freeze values) 0.163 0.033 0.043 0.042 0.098 0.102 [0.095, 0.11] <0.001 29% 27%
15 Meta Llama-3-70b-Chat-Hf (zero shot with freeze values) 0.139 0.156 0.059 0.067 0.147 0.103 [0.093, 0.114] <0.001 26% 0%
16 OpenAI GPT-4o (scratchpad) 0.134 0.192 0.062 0.073 0.163 0.104 [0.094, 0.113] <0.001 24% 0%
17 Anthropic Claude-3-5-Sonnet-20240620 (superforecaster with news 3) 0.135 0.196 0.062 0.074 0.165 0.104 [0.094, 0.114] <0.001 25% 3%
18 Qwen Qwen1.5-110B-Chat (zero shot with freeze values) 0.153 0.141 0.048 0.056 0.147 0.105 [0.096, 0.114] <0.001 26% 0%
19 Anthropic Claude-3-Opus-20240229 (scratchpad with freeze values) 0.135 0.158 0.068 0.076 0.146 0.105 [0.096, 0.115] <0.001 24% 0%
20 OpenAI GPT-4 (scratchpad with freeze values) 0.151 0.161 0.051 0.061 0.156 0.106 [0.097, 0.114] <0.001 24% 0%
21 Anthropic Claude-3-5-Sonnet-20240620 (scratchpad with news) 0.130 0.229 0.070 0.083 0.179 0.107 [0.096, 0.118] <0.001 24% 0%
22 Anthropic Claude-2.1 (scratchpad) 0.163 0.060 0.051 0.052 0.111 0.107 [0.099, 0.116] <0.001 27% 28%
23 Anthropic Claude-3-5-Sonnet-20240620 (zero shot) 0.114 0.295 0.082 0.101 0.205 0.107 [0.094, 0.12] <0.001 26% 0%
24 Meta Llama-3-70b-Chat-Hf (scratchpad with freeze values) 0.151 0.129 0.058 0.064 0.140 0.108 [0.099, 0.116] <0.001 23% 0%
25 Mistral AI Mixtral-8x22B-Instruct-V0.1 (zero shot with freeze values) 0.154 0.160 0.057 0.066 0.157 0.110 [0.099, 0.121] <0.001 29% 0%
26 Google Gemini-1.5-Flash (scratchpad with freeze values) 0.151 0.157 0.061 0.069 0.154 0.110 [0.1, 0.12] <0.001 25% 0%
27 Anthropic Claude-3-5-Sonnet-20240620 (superforecaster with news 1) 0.136 0.241 0.070 0.085 0.189 0.110 [0.099, 0.121] <0.001 24% 0%
28 OpenAI GPT-4o (zero shot with freeze values) 0.166 0.160 0.045 0.055 0.163 0.110 [0.1, 0.121] <0.001 30% 0%
29 OpenAI GPT-4o (scratchpad with news) 0.142 0.182 0.070 0.080 0.162 0.111 [0.1, 0.122] <0.001 23% 0%
30 Google Gemini-1.5-Pro (zero shot with freeze values) 0.154 0.178 0.058 0.068 0.166 0.111 [0.099, 0.123] <0.001 28% 0%
31 Google Gemini-1.5-Pro (scratchpad with freeze values) 0.139 0.199 0.073 0.084 0.169 0.111 [0.102, 0.121] <0.001 23% 0%
32 Anthropic Claude-3-Opus-20240229 (scratchpad) 0.135 0.186 0.079 0.088 0.160 0.111 [0.102, 0.121] <0.001 23% 0%
33 OpenAI GPT-4-Turbo-2024-04-09 (scratchpad with freeze values) 0.152 0.148 0.065 0.072 0.150 0.112 [0.101, 0.122] <0.001 26% 0%
34 OpenAI GPT-4-Turbo-2024-04-09 (zero shot) 0.135 0.236 0.075 0.089 0.186 0.112 [0.102, 0.123] <0.001 22% 0%
35 OpenAI GPT-4 (zero shot) 0.131 0.156 0.089 0.095 0.143 0.113 [0.103, 0.122] <0.001 24% 0%
36 Google Gemini-1.5-Pro (scratchpad with news with freeze values) 0.143 0.181 0.075 0.084 0.162 0.113 [0.103, 0.123] <0.001 22% 0%
37 Mistral AI Mistral-Large-Latest (scratchpad) 0.130 0.224 0.085 0.097 0.177 0.113 [0.104, 0.123] <0.001 23% 0%
38 Mistral AI Mixtral-8x22B-Instruct-V0.1 (scratchpad with freeze values) 0.150 0.162 0.069 0.077 0.156 0.114 [0.105, 0.123] <0.001 23% 0%
39 Google Gemini-1.5-Flash (zero shot with freeze values) 0.150 0.183 0.068 0.078 0.167 0.114 [0.101, 0.127] <0.001 27% 0%
40 Google Gemini-1.5-Pro (scratchpad) 0.139 0.209 0.079 0.090 0.174 0.115 [0.105, 0.124] <0.001 22% 0%
41 OpenAI GPT-4 (scratchpad) 0.151 0.153 0.072 0.079 0.152 0.115 [0.106, 0.123] <0.001 21% 0%
42 ForecastBench Imputed Forecaster 0.203 0.058 0.026 0.029 0.130 0.116 [0.107, 0.124] <0.001 32% 100%
43 Google Gemini-1.5-Pro (scratchpad with news) 0.143 0.175 0.083 0.091 0.159 0.117 [0.107, 0.127] <0.001 23% 0%
44 OpenAI GPT-4-Turbo-2024-04-09 (scratchpad) 0.152 0.202 0.071 0.082 0.177 0.117 [0.108, 0.127] <0.001 21% 0%
45 Anthropic Claude-2.1 (zero shot with freeze values) 0.179 0.159 0.046 0.055 0.169 0.117 [0.107, 0.127] <0.001 29% 0%
46 OpenAI GPT-4-Turbo-2024-04-09 (scratchpad with news with freeze values) 0.161 0.147 0.069 0.075 0.154 0.118 [0.108, 0.129] <0.001 26% 1%
47 Anthropic Claude-3-Opus-20240229 (zero shot) 0.146 0.220 0.081 0.093 0.183 0.119 [0.108, 0.131] <0.001 24% 0%
48 Anthropic Claude-3-Opus-20240229 (superforecaster with news 1) 0.134 0.244 0.093 0.106 0.189 0.120 [0.11, 0.131] <0.001 23% 0%
49 Meta Llama-3-70b-Chat-Hf (zero shot) 0.139 0.199 0.092 0.102 0.169 0.120 [0.11, 0.131] <0.001 23% 0%
50 Google Gemini-1.5-Flash (scratchpad) 0.151 0.193 0.080 0.090 0.172 0.120 [0.11, 0.13] <0.001 22% 0%
51 Meta Llama-3-8b-Chat-Hf (zero shot with freeze values) 0.164 0.209 0.064 0.077 0.187 0.120 [0.109, 0.132] <0.001 28% 0%
52 Qwen Qwen1.5-110B-Chat (scratchpad with news with freeze values) 0.155 0.172 0.078 0.086 0.163 0.121 [0.111, 0.13] <0.001 22% 0%
53 Qwen Qwen1.5-110B-Chat (scratchpad with freeze values) 0.161 0.184 0.074 0.084 0.172 0.123 [0.113, 0.132] <0.001 22% 0%
54 Mistral AI Mistral-Large-Latest (zero shot) 0.134 0.187 0.104 0.112 0.160 0.123 [0.111, 0.134] <0.001 20% 0%
55 Qwen Qwen1.5-110B-Chat (zero shot) 0.153 0.173 0.085 0.092 0.163 0.123 [0.113, 0.133] <0.001 19% 2%
56 Mistral AI Mixtral-8x22B-Instruct-V0.1 (scratchpad) 0.150 0.204 0.086 0.096 0.177 0.123 [0.114, 0.133] <0.001 22% 0%
57 Google Gemini-1.5-Flash (scratchpad with news with freeze values) 0.161 0.187 0.077 0.087 0.174 0.124 [0.113, 0.134] <0.001 22% 0%
58 Mistral AI Mixtral-8x7B-Instruct-V0.1 (scratchpad) 0.165 0.096 0.082 0.084 0.130 0.124 [0.115, 0.134] <0.001 25% 23%
59 Google Gemini-1.5-Pro (superforecaster with news 3) 0.158 0.175 0.083 0.091 0.167 0.125 [0.114, 0.135] <0.001 24% 0%
60 Mistral AI Mixtral-8x7B-Instruct-V0.1 (zero shot with freeze values) 0.165 0.172 0.076 0.084 0.168 0.125 [0.111, 0.139] <0.001 32% 0%
61 OpenAI GPT-4o (superforecaster with news 3) 0.170 0.160 0.073 0.080 0.165 0.125 [0.115, 0.135] <0.001 22% 7%
62 Anthropic Claude-3-5-Sonnet-20240620 (superforecaster with news 2) 0.159 0.208 0.079 0.091 0.184 0.125 [0.114, 0.136] <0.001 24% 0%
63 OpenAI GPT-4o (zero shot) 0.166 0.229 0.071 0.085 0.197 0.125 [0.114, 0.136] <0.001 23% 0%
64 ForecastBench LLM Crowd (gpt-4o, claude-3.5-sonnet, gemini-1.5-pro) with news 0.174 0.203 0.065 0.077 0.188 0.126 [0.116, 0.135] <0.001 22% 29%
65 ForecastBench LLM Crowd (gpt-4o, claude-3.5-sonnet, gemini-1.5-pro) with news 0.175 0.195 0.065 0.077 0.185 0.126 [0.116, 0.135] <0.001 23% 29%
66 Anthropic Claude-2.1 (scratchpad with news with freeze values) 0.172 0.121 0.076 0.080 0.146 0.126 [0.116, 0.136] <0.001 23% 4%
67 OpenAI GPT-4-Turbo-2024-04-09 (scratchpad with news) 0.161 0.233 0.078 0.091 0.197 0.126 [0.115, 0.137] <0.001 24% 0%
68 OpenAI GPT-4-Turbo-2024-04-09 (superforecaster with news 3) 0.168 0.155 0.078 0.085 0.161 0.126 [0.116, 0.136] <0.001 23% 13%
69 Mistral AI Mixtral-8x22B-Instruct-V0.1 (scratchpad with news with freeze values) 0.163 0.161 0.083 0.090 0.162 0.126 [0.116, 0.137] <0.001 22% 0%
70 Qwen Qwen1.5-110B-Chat (scratchpad with news) 0.155 0.173 0.091 0.098 0.164 0.126 [0.117, 0.136] <0.001 21% 0%
71 Anthropic Claude-2.1 (scratchpad with news) 0.172 0.186 0.071 0.081 0.179 0.126 [0.116, 0.136] <0.001 24% 13%
72 Qwen Qwen1.5-110B-Chat (superforecaster with news 1) 0.155 0.285 0.081 0.098 0.220 0.127 [0.115, 0.138] <0.001 22% 0%
73 ForecastBench LLM Crowd (gpt-4o, claude-3.5-sonnet, gemini-1.5-pro) with news 0.175 0.202 0.067 0.079 0.189 0.127 [0.117, 0.137] <0.001 22% 29%
74 Anthropic Claude-3-Opus-20240229 (superforecaster with news 3) 0.152 0.135 0.102 0.105 0.144 0.129 [0.118, 0.139] <0.001 21% 9%
75 Qwen Qwen1.5-110B-Chat (scratchpad) 0.161 0.210 0.085 0.096 0.185 0.129 [0.119, 0.139] <0.001 21% 0%
76 Meta Llama-3-70b-Chat-Hf (scratchpad) 0.151 0.209 0.098 0.107 0.180 0.129 [0.119, 0.139] <0.001 21% 0%
77 Google Gemini-1.5-Pro (superforecaster with news 1) 0.159 0.251 0.085 0.100 0.205 0.129 [0.119, 0.14] <0.001 24% 0%
78 Meta Llama-3-8b-Chat-Hf (scratchpad with freeze values) 0.183 0.147 0.069 0.076 0.165 0.130 [0.121, 0.138] <0.001 24% 0%
79 Mistral AI Mixtral-8x22B-Instruct-V0.1 (scratchpad with news) 0.163 0.198 0.087 0.097 0.181 0.130 [0.12, 0.14] <0.001 22% 0%
80 Google Gemini-1.5-Pro (zero shot) 0.154 0.280 0.090 0.106 0.217 0.130 [0.117, 0.143] <0.001 22% 0%
81 Meta Llama-3-8b-Chat-Hf (zero shot) 0.164 0.268 0.081 0.097 0.216 0.131 [0.118, 0.143] <0.001 26% 0%
82 Anthropic Claude-3-Opus-20240229 (superforecaster with news 2) 0.151 0.239 0.098 0.111 0.195 0.131 [0.119, 0.143] <0.001 22% 0%
83 Google Gemini-1.5-Flash (scratchpad with news) 0.161 0.221 0.091 0.102 0.191 0.131 [0.12, 0.143] <0.001 22% 0%
84 Mistral AI Mistral-Large-Latest (scratchpad with news with freeze values) 0.165 0.178 0.091 0.099 0.172 0.132 [0.121, 0.143] <0.001 23% 0%
85 Google Gemini-1.5-Flash (zero shot) 0.150 0.222 0.104 0.114 0.186 0.132 [0.12, 0.145] <0.001 22% 0%
86 OpenAI GPT-4o (superforecaster with news 1) 0.167 0.267 0.085 0.101 0.217 0.134 [0.121, 0.146] <0.001 24% 0%
87 Anthropic Claude-3-Opus-20240229 (scratchpad with news) 0.170 0.212 0.087 0.098 0.191 0.134 [0.123, 0.145] <0.001 22% 0%
88 OpenAI GPT-4-Turbo-2024-04-09 (superforecaster with news 1) 0.166 0.253 0.088 0.102 0.210 0.134 [0.123, 0.146] <0.001 21% 0%
89 Anthropic Claude-3-Opus-20240229 (scratchpad with news with freeze values) 0.170 0.212 0.090 0.100 0.191 0.135 [0.124, 0.146] <0.001 22% 0%
90 Mistral AI Mixtral-8x22B-Instruct-V0.1 (superforecaster with news 3) 0.179 0.146 0.087 0.092 0.163 0.136 [0.126, 0.146] <0.001 20% 18%
91 Mistral AI Mistral-Large-Latest (scratchpad with news) 0.165 0.169 0.102 0.108 0.167 0.136 [0.126, 0.147] <0.001 23% 0%
92 Mistral AI Mixtral-8x22B-Instruct-V0.1 (zero shot) 0.154 0.264 0.106 0.119 0.209 0.137 [0.124, 0.15] <0.001 23% 0%
93 Mistral AI Mistral-Large-Latest (superforecaster with news 2) 0.153 0.206 0.114 0.122 0.179 0.137 [0.126, 0.149] <0.001 21% 1%
94 Mistral AI Mistral-Large-Latest (superforecaster with news 1) 0.163 0.252 0.099 0.112 0.207 0.138 [0.126, 0.149] <0.001 23% 0%
95 Mistral AI Mixtral-8x22B-Instruct-V0.1 (superforecaster with news 1) 0.167 0.246 0.096 0.109 0.207 0.138 [0.126, 0.151] <0.001 21% 0%
96 Qwen Qwen1.5-110B-Chat (superforecaster with news 3) 0.180 0.172 0.091 0.098 0.176 0.139 [0.129, 0.149] <0.001 22% 6%
97 OpenAI GPT-4o (superforecaster with news 2) 0.189 0.176 0.083 0.091 0.183 0.140 [0.128, 0.152] <0.001 23% 1%
98 Anthropic Claude-2.1 (zero shot) 0.179 0.175 0.096 0.103 0.177 0.141 [0.13, 0.152] <0.001 20% 0%
99 Mistral AI Mixtral-8x7B-Instruct-V0.1 (zero shot) 0.165 0.210 0.108 0.117 0.188 0.141 [0.127, 0.155] <0.001 25% 0%
100 Anthropic Claude-2.1 (superforecaster with news 3) 0.171 0.159 0.107 0.111 0.165 0.141 [0.13, 0.152] <0.001 22% 6%
101 Mistral AI Mistral-Large-Latest (superforecaster with news 3) 0.190 0.187 0.089 0.097 0.188 0.144 [0.133, 0.154] <0.001 22% 8%
102 Mistral AI Mixtral-8x7B-Instruct-V0.1 (superforecaster with news 1) 0.196 0.165 0.087 0.093 0.181 0.145 [0.133, 0.156] <0.001 27% 22%
103 Google Gemini-1.5-Flash (superforecaster with news 2) 0.172 0.250 0.105 0.117 0.211 0.145 [0.133, 0.157] <0.001 22% 1%
104 Meta Llama-3-8b-Chat-Hf (scratchpad) 0.183 0.187 0.102 0.109 0.185 0.146 [0.136, 0.156] <0.001 22% 0%
105 Qwen Qwen1.5-110B-Chat (superforecaster with news 2) 0.183 0.207 0.103 0.112 0.195 0.148 [0.137, 0.159] <0.001 22% 5%
106 Mistral AI Mixtral-8x7B-Instruct-V0.1 (scratchpad with freeze values) 0.165 0.181 0.126 0.131 0.173 0.148 [0.135, 0.161] <0.001 24% 16%
107 Meta Llama-2-70b-Chat-Hf (zero shot with freeze values) 0.190 0.125 0.105 0.106 0.157 0.148 [0.136, 0.16] <0.001 23% 0%
108 Anthropic Claude-2.1 (superforecaster with news 2) 0.181 0.201 0.108 0.116 0.191 0.149 [0.137, 0.16] <0.001 25% 19%
109 Meta Llama-2-70b-Chat-Hf (scratchpad with freeze values) 0.183 0.179 0.109 0.115 0.181 0.149 [0.139, 0.159] <0.001 21% 0%
110 Mistral AI Mixtral-8x22B-Instruct-V0.1 (superforecaster with news 2) 0.192 0.185 0.099 0.106 0.188 0.149 [0.138, 0.16] <0.001 23% 3%
111 Google Gemini-1.5-Flash (superforecaster with news 3) 0.186 0.191 0.105 0.113 0.189 0.149 [0.138, 0.16] <0.001 21% 13%
112 OpenAI GPT-4-Turbo-2024-04-09 (superforecaster with news 2) 0.183 0.173 0.110 0.115 0.178 0.149 [0.137, 0.162] <0.001 24% 2%
113 Mistral AI Mixtral-8x7B-Instruct-V0.1 (superforecaster with news 3) 0.202 0.138 0.092 0.096 0.170 0.149 [0.139, 0.16] <0.001 25% 22%
114 Google Gemini-1.5-Flash (superforecaster with news 1) 0.175 0.308 0.108 0.125 0.241 0.150 [0.137, 0.163] <0.001 24% 0%
115 Anthropic Claude-3-Haiku-20240307 (scratchpad with freeze values) 0.190 0.170 0.105 0.111 0.180 0.150 [0.14, 0.161] <0.001 22% 0%
116 Mistral AI Mixtral-8x7B-Instruct-V0.1 (superforecaster with news 2) 0.203 0.142 0.096 0.100 0.173 0.152 [0.14, 0.164] <0.001 27% 28%
117 Mistral AI Mixtral-8x7B-Instruct-V0.1 (scratchpad with news with freeze values) 0.223 0.155 0.074 0.081 0.189 0.152 [0.14, 0.163] <0.001 26% 20%
118 Anthropic Claude-3-Haiku-20240307 (scratchpad) 0.190 0.194 0.111 0.118 0.192 0.154 [0.143, 0.165] <0.001 21% 0%
119 Anthropic Claude-3-Haiku-20240307 (zero shot with freeze values) 0.234 0.136 0.069 0.075 0.185 0.154 [0.144, 0.165] <0.001 25% 0%
120 Google Gemini-1.5-Pro (superforecaster with news 2) 0.187 0.264 0.112 0.125 0.226 0.156 [0.142, 0.17] <0.001 22% 0%
121 Anthropic Claude-3-Haiku-20240307 (superforecaster with news 2) 0.192 0.188 0.114 0.120 0.190 0.156 [0.145, 0.167] <0.001 21% 0%
122 OpenAI GPT-3.5-Turbo-0125 (scratchpad with freeze values) 0.197 0.250 0.107 0.119 0.223 0.158 [0.147, 0.169] <0.001 22% 0%
123 Meta Llama-2-70b-Chat-Hf (scratchpad) 0.183 0.221 0.125 0.133 0.202 0.158 [0.147, 0.17] <0.001 20% 0%
124 Anthropic Claude-3-Haiku-20240307 (scratchpad with news with freeze values) 0.211 0.171 0.103 0.109 0.191 0.160 [0.149, 0.171] <0.001 22% 0%
125 Mistral AI Mixtral-8x7B-Instruct-V0.1 (scratchpad with news) 0.223 0.208 0.089 0.099 0.216 0.161 [0.149, 0.173] <0.001 25% 19%
126 Anthropic Claude-3-Haiku-20240307 (scratchpad with news) 0.211 0.192 0.104 0.112 0.201 0.161 [0.15, 0.172] <0.001 22% 0%
127 OpenAI GPT-3.5-Turbo-0125 (scratchpad) 0.197 0.238 0.121 0.132 0.217 0.164 [0.153, 0.176] <0.001 21% 0%
128 ForecastBench Always 0.5 0.203 0.221 0.119 0.128 0.212 0.165 [0.155, 0.176] <0.001 18% 0%
129 Anthropic Claude-2.1 (superforecaster with news 1) 0.213 0.262 0.104 0.118 0.238 0.165 [0.153, 0.178] <0.001 22% 6%
130 Anthropic Claude-3-Haiku-20240307 (zero shot) 0.234 0.152 0.097 0.102 0.193 0.168 [0.157, 0.179] <0.001 23% 0%
131 Meta Llama-2-70b-Chat-Hf (zero shot) 0.190 0.205 0.153 0.157 0.197 0.174 [0.16, 0.187] <0.001 21% 1%
132 Anthropic Claude-3-Haiku-20240307 (superforecaster with news 3) 0.213 0.221 0.128 0.136 0.217 0.175 [0.163, 0.186] <0.001 21% 22%
133 Anthropic Claude-3-Haiku-20240307 (superforecaster with news 1) 0.224 0.240 0.138 0.147 0.232 0.185 [0.172, 0.199] <0.001 21% 0%
134 OpenAI GPT-3.5-Turbo-0125 (zero shot with freeze values) 0.303 0.159 0.061 0.069 0.231 0.186 [0.172, 0.2] <0.001 33% 0%
135 ForecastBench Random Uniform 0.266 0.256 0.157 0.165 0.261 0.216 [0.199, 0.232] <0.001 24% 0%
136 OpenAI GPT-3.5-Turbo-0125 (zero shot) 0.303 0.238 0.129 0.138 0.270 0.220 [0.205, 0.236] <0.001 23% 0%
137 ForecastBench Always 0 0.267 0.346 0.183 0.197 0.306 0.232 [0.208, 0.256] <0.001 30% 0%
138 ForecastBench Always 1 0.494 0.500 0.412 0.420 0.497 0.457 [0.428, 0.487] <0.001 26% 0%