
Curious that it set up the math problem right and got the value wrong, but the value was close enough that it still got the answer right.

I wonder why it gets it wrong when it spits out the value. I figure 25/cos(10°) is around 25.38; GPT says it’s 25.44.
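For what it’s worth, a quick check with Python’s standard library (nothing GPT-specific, just the arithmetic) lands on 25.38:

    import math

    # 25 / cos(10°); math.cos expects radians, so convert first
    value = 25 / math.cos(math.radians(10))
    print(value)  # ≈ 25.3857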

I can’t wait for the next iteration of these tools that have agency to reach out to a service for an answer, like Wolfram or a Python interpreter or any expert/oracle.

I think it would be cool to see which circumstances prompt the AI to delegate to an expert for an answer - what criteria would signal that it doesn’t quite know the answer, or that it shouldn’t guess?

I know there’s something along these lines with AutoGPT and/or AgentGPT, but I wasn’t super impressed when I looked at them both. Granted, this was a few months ago.



> I can’t wait for the next iteration of these tools that have agency to reach out to a service for an answer, like Wolfram or a Python interpreter or any expert/oracle.

ChatGPT-4 has a plugin system, and there is already a Wolfram plugin.

Using that plugin, ChatGPT-4 is happy to tell me the exact answer, 25 sec(π/18), as well as the decimal approximation 25.3857.

https://chat.openai.com/share/468db5e9-4983-4bf6-9efb-f42783...

That link doesn't properly show that off, so here's a screenshot: https://i.imgur.com/foE5hgR.png
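If you’d rather verify the exact form locally than trust the plugin, a short sympy sketch (assuming you have sympy installed) reproduces both the symbolic answer and the decimal - note that π/18 rad is exactly 10°:

    import sympy as sp

    # Exact answer: 25·sec(π/18), i.e. 25/cos(10°)
    exact = 25 * sp.sec(sp.pi / 18)
    print(exact)           # 25*sec(pi/18)
    print(sp.N(exact, 6))  # 25.3857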



