I was wrong. The ChatGPT code interpreter is OP (knowsuchagency.notion.site)
11 points by knowsuchagency on May 26, 2023 | hide | past | favorite | 24 comments


Having watched the video the article links to:

https://www.youtube.com/watch?v=O8GUH0_htRM and the followup video here: https://www.youtube.com/watch?v=_njf22xx8BQ

... it's a bad day at the office if you work at a place like Tableau, MathWorks or Wolfram. CI isn't yet a replacement for any of those companies' products, but who's going to sign any long-term contracts with them from this point on? Not me. It's hard to see what this model can't do, given sufficient development.

Calling this thing "Code Interpreter" is an interesting decision on OpenAI's part. It looks more like the Babel Fish of data science. Utterly amazing.


Yes, I'm a PM (product manager), and I've been learning Python. My dream is to go from client engagement and data ingestion to insights and analytics, delivered as a one-man team.

What used to take a team a week, I want to be able to do myself in one to two days.


Right? It didn't click for me until I watched those videos.


Yeah, this is a pretty huge deal. Absolutely nothing is going to look the same in a couple of years. If I didn't think that way before, I do now.


Why would Wolfram be worried? Their core product (Mathematica) is perfectly symbiotic with LLMs.


The impression I got from the videos is that CI is already capable of outperforming Alpha, at least in some respects. There's a Wolfram Alpha plugin, but CI seemed to return better results.

As I understand it, Mathematica is more or less a client-side Alpha at this point, and vice versa.


>Mathematica is more or less a client-side Alpha at this point

They're not really comparable. Wolfram Alpha exposes a small fraction of the symbolic/numeric algorithms Mathematica has, and the reasons why someone would choose Mathematica over e.g. SymPy+NumPy won't really change with LLMs in the mix. If anything, in the age of LLM-generated code, Mathematica might even have an advantage over the others because of how concise and uniform the language is, and Wolfram has already shown interest in integrating with ChatGPT.

I don't see Mathworks really being affected either, since as I understand most of their money is made via the proprietary toolkits (otherwise someone could just use Octave). Tableau is probably toast though.


Using CI for Code Interpreter instead of Continuous Integration made this very confusing.


Here are the standout features of the code interpreter plugin, based on my experience using it ~10 times:

1. Handle Large Files with Ease: Where the model's context window is limited, the plugin accepts significantly larger files. It stores the file and executes Python code on it whenever necessary. Note that the code can use a specific set of libraries, including PyPDF2, but not AI libraries.

2. Generate and Execute Code: This plugin goes beyond code generation. It also executes the generated code and provides you with the output.

In a nutshell, the code interpreter plugin saves you from the hassle of:

- Describing the input file format to ChatGPT, as it automatically adapts the code accordingly.

- Manually copying code from ChatGPT to a Jupyter notebook to view the output.
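The pattern behind point 1 (store the file, then run code over it, instead of pushing the whole thing through the context window) can be sketched with the standard library alone. This is my own illustration, not the plugin's actual code; the file path and column name are hypothetical:

```python
import csv

def summarize_large_csv(path, column):
    """Stream a CSV that would never fit in a model's context window,
    keeping only a running summary in memory."""
    total, count = 0.0, 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            total += float(row[column])
            count += 1
    return {"rows": count, "mean": total / count if count else None}

# e.g. summarize_large_csv("sales.csv", "amount")
```

The point is that the model only ever sees the small summary dict, never the raw file.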


The point is this kind of 'on-the-fly' software or ui dispenses with the need of countless small web apps and parts of larger software.

"Display my calendar for the next 3 weeks, in a vertical column per day, divided into hours, with an expanded fisheye view of morning work hours, and highlight any out-of-office meetings. Make meetings clickable, and when clicked, open a page including more details of the meeting and its participants, and links to their Facebook pages. Save the program as part of my dashboard."

Etc.


There are rumors that the 'code interpreter' plugin in particular is different: that it uses more advanced training, possibly even a more advanced pre-trained model, and that it's better understood as something like GPT-4.5. I personally don't have access, so I can't offer even anecdotal evidence.


Yeah but is the heatmap correct, or did the LLM just hallucinate the results?


When I've used the code interpreter in similar cases, it generates plots using Python code that it provides. So it's typically not hallucinating, and it shows its work (the Python), which lets me verify the results.
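Since the Python is visible, the aggregation behind a plot can be re-run by hand. A toy sketch of recomputing the counts behind a heatmap (unit-sized bins, pure standard library; the binning scheme is my assumption, not necessarily what the plugin emits):

```python
from collections import Counter

def heatmap_counts(points):
    """Bucket (x, y) points into unit-sized grid cells: the matrix of
    counts a heatmap colors, checkable independently of the rendered plot."""
    grid = Counter()
    for x, y in points:
        grid[(int(x), int(y))] += 1
    return grid
```

If these counts match what the rendered heatmap shows, the plot wasn't hallucinated.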


So, ChatGPT actually generates Python code, runs it and outputs the results?

This means that the "only" addition is an API that ChatGPT can sometimes "decide" to use to run some Python code?
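In other words, a loop where the model's proposed code is executed somewhere and the output fed back to the chat, something like this toy stand-in (my sketch; the real hosted sandbox is presumably far more restricted):

```python
import subprocess
import sys
import tempfile

def run_proposed_code(code, timeout=10):
    """Execute model-proposed Python in a fresh subprocess and capture
    stdout/stderr, so the result can be handed back to the conversation."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    proc = subprocess.run(
        [sys.executable, path], capture_output=True, text=True, timeout=timeout
    )
    return proc.stdout, proc.stderr
```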


That's the key: you can see the code it uses to do its analysis and render plots.


Even the term hallucinate gives them too much credit. It's more accurate to say they just make stuff up that sometimes happens to be accurate or useful.


You're right. I was using the popular term, but I think "confabulate" is more correct. https://www.technologyreview.com/2023/05/02/1072528/geoffrey...


I have been unable to find the Code Interpreter plug-in, it is not in the plug-in list or model list. How do I gain access to it?


You need to be a plus subscriber. I don't know if they've rolled out the feature to all paid subscribers, however.


I am a Plus subscriber. Can’t even find a way to sign up for a waitlist.


If you click the "..." icon next to your name at the lower-left corner of the window, you can enter a 'Settings' menu that lets you enable plugins via 'Beta features'. Nothing seems to change when I do that, and I don't see any other buttons to push to activate Code Interpreter or any other plugin features.


OP?



Overpowered.



