Not OP, but you can ask ChatGPT in plain language for a command to scan a server or network for certain things, and it can generate those commands for you in seconds
Why not just google it? There's always a blog or Stack Overflow question to copy snippets from.
What keeps me away is that unless you're already familiar with a topic, you won't catch errors. The command it suggests might not do exactly what you wanted.
Let's try an example. This took ChatGPT seconds to answer. Tell me how long it would take a person with zero ffmpeg experience to come up with the same solution using just Google / Stack Overflow.
---
Create me a command using FFMPEG to convert all video files in a directory. The source files are mkv files in 4k, and they should be output as mp4 files with h264 encoding at 720p. Also, produce a thumbnail image for each cropped to 320x320 pixels using a frame of the video 10 seconds from the beginning.
Also, this does not just have to be blindly copy/pasted. ChatGPT also went on to break down exactly what this command does so you can revise it / understand it.
Here's a breakdown of the ffmpeg command and its flags used in the solution:
* -i "$file": Specifies the input file.
* -c:v libx264: Sets the video codec to libx264, which is a widely used H.264 video encoder.
* -preset slow: Specifies the encoding preset as "slow." The encoding presets range from "ultrafast" (fastest but lower compression) to "veryslow" (slowest but higher compression). "Slow" strikes a balance between compression efficiency and encoding time.
* -crf 18: Sets the Constant Rate Factor (CRF) for video quality. A lower CRF value, such as 18, indicates higher quality and larger file sizes. You can adjust this value based on your desired trade-off between quality and file size.
* -c:a aac -b:a 192k: Sets the audio codec to AAC with a bitrate of 192 kbps. Adjust the bitrate value as per your requirements.
* -vf "select=eq(n\,300)": Specifies a video filter that selects frame number 300 (note that n counts frames, not seconds, so this is 10 seconds from the beginning only at 30 fps). You can modify the value inside the eq() function to select a different frame.
* -s 1280x720: Sets the output resolution of the video to 1280x720 pixels, representing 720p resolution.
* -frames:v 1: Specifies that only one frame should be extracted from the video.
* "${file%.*}.mp4": Generates the output MP4 file with the same name as the input file, but with the extension changed to .mp4.
Errors like incompatible parameter combinations or hallucinated, non-existent params you will easily catch once you try them. But what about more subtle conceptual misunderstandings?
I meant cases like asking how to X and ChatGPT suggesting something similar (and valid) but not what I wanted. Or imagine silent failures like an additional switch that happens to exclude what you wanted.
Not an issue if you're just automating grunt work you know how to do yourself, of course. But I'm interested in the case where you aren't familiar with a tool.
Google and Stack Overflow have the same issue. Often I won't find the answer I'm looking for, and I'll have to use something that's close to my problem but not exactly the same. ChatGPT has the same issue, but I can tell it what worked and what didn't and it'll give me an updated answer.
The alternative to that is to open 15 tabs with Stack Overflow, forum threads, GitHub issues, Reddit. For the problems I usually encounter, it makes sense to ask ChatGPT first, and if I see that I'm getting nowhere after 2-3 replies I'll fall back on Google, documentation, or trying something else.
I've expressed this badly. By "but not what I wanted" I don't mean it suggesting an alternative. I mean it giving an answer that claims to do what I asked for but doesn't actually. I would not be able to catch this. I hope this clarifies why I gave that example for my uncertainty about ChatGPT giving wrong answers.
On Stack Overflow and the like, you will at least know when a question does not apply to you.
If the answer doesn't do what I want, either I see it and can fix it, or I don't, and then the origin of the answer doesn't really matter. That's my experience solving my problems, and I'm sure other people have different experiences that lead them to different conclusions. But for me, for now, ChatGPT as a first step makes sense.
> I mean it giving an answer that claims to do what I asked for but doesn't actually. I would not be able to catch this. I hope this clarifies why I gave that example for my uncertainty about ChatGPT giving wrong answers.
You decompose the problem further and then tell gpt it was wrong and what you know.
> Not an issue if you're just automating grunt work you know how to do yourself, of course. But I'm interested in the case where you aren't familiar with a tool.
That's valid. I find that most people lump together "grunt work you know how to do and are automating" and "a tool you're not familiar with, using GPT to figure it out".
In truth, they require very different strategies. For figuring things out, I don't place much trust in it and require a very high bar of proof before letting it help me.
Essentially, for figuring things out I make it give me examples I can independently verify, to make sure I understand things at a conceptual level.