It’s funny how whenever I try to talk to an #LLM about something factual, I almost instinctively apply my anti-bullshit prompts for humans, like:
- “What % of your net worth would you bet on it?”
- “Would you short their stock?” (or equivalent)
- “Give me a probability of X within the next Y?” (or equivalent)
And, naturally, it replies with something like “I’m not giving financial advice” or “it’s hard to estimate the probability; here’s why it might happen, here’s why not”.
Of course it’s hard! Giving definite answers rather than “yes and no” is demanding, but that’s how humans hold each other accountable! Y’know, I don’t even ask for precision. Give me the bounds of a rectangular distribution; I’m fine as long as they’re not [0, 1].
But LLMs are well-trained on a million ways humans avoid answering questions. Just shows how much room for improvement there still is.
#AI