So you want to run your own GPT on your own machine? Yeah, me too.

Some LLaMA models are available (if you have a 24 GB video card), and there are other ways to do it, but GPT4All promises to be a different kind of solution. I saw this tweet and had to check it out.

So who runs a binary on their machine from some rando on the Internet? Usually, not me. However, Brian Roemmele appears to be a legit guy, and many people I respect follow him. So I decided to trust him and run this on my laptop. You might not. If so, run this in a VM or on Tails or something.

Here’s some more info on GPT4All.

From their website:

GPT4All Chat is a locally-running AI chat application powered by the GPT4All-J Apache 2 Licensed chatbot. The model runs on your computer’s CPU, works without an internet connection, and sends no chat data to external servers (unless you opt-in to have your chat data be used to improve future GPT4All models). It allows you to communicate with a large language model (LLM) to get helpful answers, insights, and suggestions. GPT4All Chat is available for Windows, Linux, and macOS.

You can get it for Windows, Linux, and macOS.

I decided to try it out in Arch Linux, because of course. Let’s roll.

Installation

The installation is super easy. Here are the steps.

I downloaded the file from here.
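
If you're more cautious than I am about random binaries, you can at least take a checksum of the installer before running it and compare it against whatever the project publishes (assuming they publish one; I haven't checked). This step is purely my own addition:

# record the SHA-256 of the installer before running it
sha256sum gpt4all-0.1.0-Linux.run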

Then I ran the following:

chmod +x gpt4all-0.1.0-Linux.run
./gpt4all-0.1.0-Linux.run

It pulls up this screen:

[Screenshot: GPT for Arch Linux]

Click next.

[Screenshot: GPT for Arch Linux]

Make a note of this folder. In my case, it was /opt/gpt4all 0.1.0. You’ll need this location to run it.

Select your components (only one for now)

[Screenshot: GPT for Arch Linux]

Accept the agreement (GNU Public License)

[Screenshot: GPT for Arch Linux]

And it’s ready to install.

[Screenshot: GPT for Arch Linux]

Grab your favorite beverage.

[Screenshot: GPT for Arch Linux]

The tweet said 20 minutes, but it took less than 10 minutes (about 6) on my laptop with the following specs:

  • Intel Core i7-11800H @ 4.60 GHz
  • 32 GB RAM
  • NVMe SSD

Your results will vary, but it didn’t take long at all.

[Screenshot: GPT for Arch Linux]

Party time, excellent. We’re done. Let’s dig in.

Running the GPT4All Application

So let’s head over to /opt/gpt4all 0.1.0

cd /opt/gpt4all\ 0.1.0/
cd bin
./chat
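
If you don't feel like cd-ing around every time, the same thing works as a one-liner (just a convenience shortcut of my own, not something the installer sets up):

cd "/opt/gpt4all 0.1.0/bin" && ./chat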

You’ll see this in your terminal:

[Screenshot: GPT for Arch Linux]

and it loads right up:

[Screenshot: GPT for Arch Linux]

Asking GPT4All a Question

My first question:

act as a super awesome professional Linux developer. Tell me how I can check for the last 50 system messages my kernel put out

[Screenshot: GPT for Arch Linux]

Awesome, it gave me an answer:

To check for the last 50 system messages your kernel put out, you can follow these steps:

1. Open your terminal on your Linux machine.
2. Type the command `dmesg -s' (with a lowercase "s"). This will give you a summary of all the system messages that your kernel has put out.
3. Look for the system messages that have the word "system" in them. You can do this by filtering the output of the command. For example, to only show system messages, type the command `dmesg -s | grep system`.
4. Once you have the list of system messages, you can scroll through them to find the last 50 messages. To do this, you can use the command `dmesg | tail -n 50`. This will show you the last 50 system messages.
5. You can also use the command `last 50 system messages` to get a list of the last 50 system messages.

Well, let’s try that out.

Verifying the Results

So I did step one, opening the terminal. Then step 2:

  1. Type the command `dmesg -s’ (with a lowercase “s”). This will give you a summary of all the system messages that your kernel has put out

So I ran that:

[Screenshot: GPT for Arch Linux]

Aww, sad trumpet. While GPT4All was technically correct that the -s flag exists, you have to give it a buffer size:

[Screenshot: GPT for Arch Linux]
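
For reference, -s sets the size of the buffer dmesg uses to query the kernel ring buffer, so it wants a number after it. Something along these lines would run (65536 is just an arbitrary example size, not anything GPT4All suggested, and as I discover below you'll likely need sudo on Arch):

# -s expects a buffer size in bytes; 65536 is an arbitrary example
sudo dmesg -s 65536 | tail -n 50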

No big deal, perhaps it’s my fault. I’ll change the prompt a bit.

act as a super awesome professional Linux developer. Tell me how I can check for the last 50 system messages my kernel put out in Arch Linux.

This gives me a different result:

To check for the last 50 system messages in Arch Linux, you can follow these steps:

1. Open your terminal on your Linux machine.
2. Type the command `dmesg | tail -n 50 | grep "system"`. This will show you the last 50 system messages.
3. You can also use the command `last 50 system messages` to get a list of the last 50 system messages.

Note: The "system" keyword in the `dmesg` output should be enough to find the last 50 system messages. However, if you want to make sure that you have not missed any messages, you can also use the command `dmesg -k | grep "system"` to search for any kernel messages related to system activities.

Ok, cool. Let’s try that.

[Screenshot: GPT for Arch Linux]

Oops, guess I have to use sudo.
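
Same pipeline as GPT4All's answer, just with sudo in front:

sudo dmesg | tail -n 50 | grep "system"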

[Screenshot: GPT for Arch Linux]

Hey! It worked. I can forgive this small oversight simply because this is trained on text from the internet, and hardly ANY tutorials tell you to use sudo for this.
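
If you're wondering why sudo is needed at all: many distros, Arch included, restrict reading the kernel ring buffer to root via the kernel.dmesg_restrict sysctl. I believe that's what's happening here, though I haven't dug into it; you can check the setting yourself:

# 1 means only root (CAP_SYSLOG) can read the kernel log
sysctl kernel.dmesg_restrict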

Moving on.

Let’s try

last 50 system messages

I’ve never in my life used this command, so let’s see:

[Screenshot: GPT for Arch Linux]

Ok, well, there is a message there about wtmp. This is a system log file that maintains records of user logins. I just powered on this machine, so this one makes sense.
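
That wtmp line is actually coming from the plain `last` command, which reads login records from /var/log/wtmp; the extra words in "last 50 system messages" just get treated as names to search for. If you really wanted the last 50 login records, `last` has a flag for that (my own aside, not something GPT4All suggested):

# last reads /var/log/wtmp; -n limits how many entries it prints
last -n 50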

However, I know that there are more kernel messages than just this one. So I’ll check the notes:

Note: The “system” keyword in the dmesg output should be enough to find the last 50 system messages. However, if you want to make sure that you have not missed any messages, you can also use the command dmesg -k | grep "system" to search for any kernel messages related to system activities.

Again, I do have to use sudo for this:

sudo dmesg -k | grep "system"

[Screenshot: GPT for Arch Linux]

Awesome. This is a kernel message too! In fact, it’s a kernel log message related to filesystem synchronization. There are lots of them because the kernel has been logging synchronizations since I booted up (which wasn’t long ago).

And this particular command is one I’ve used quite a bit. So it’s nice to see it here as an answer.
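
One answer GPT4All didn't mention (this is my own addition): on a systemd distro like Arch, journalctl can hand you exactly the last 50 kernel messages in one shot. Depending on your group memberships you may not even need the sudo:

# -k shows kernel messages from the current boot, -n limits the count
sudo journalctl -k -n 50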

Good job GPT4All.

Explain Your Answers

So let’s see how well it explains the answers it just gave me.

explain in detail how "last 50 system messages" works

Since I’ve never used this command, I’m genuinely curious about this.

The answer:

The "last 50 system messages" command is used to show a list of the last 50 system messages that have been put out by the kernel on your Linux machine. To get this list, you can use the `dmesg` command with the `-s' option to search for system-related messages. This will give you a summary of all the system-related messages that your kernel has put out.

After running the `dmesg -s' command, you can scroll through the output of `dmesg` to find the last 50 system messages. These messages will have the keyword "system" in them, and will have been put out by the kernel. You can then use the `last 50 system messages` command to show you the last 50 system messages your kernel has put out.

If you need to search for system-related messages that have not been mentioned in the output, you can use the command `dmesg -k | grep "system"` to search for any messages related to system activities. This will give you a complete list of system-related messages, which you can then use to find the last 50 system messages.

Ok, so according to GPT4All it’s basically an alias of `dmesg -s` with some arbitrary value dropped in. That isn’t actually true, though: there is no `last 50 system messages` command. The shell just ran the regular `last` command with "50", "system", and "messages" as arguments, which is why all I got back was that wtmp line. This explanation looks like a confident hallucination.

Summary

Here is my brief experience with a “ChatGPT on the desktop”. Of course I will dig in much more. I admit I know absolutely nothing about how this works under the covers. It even says it works offline. I tried it. I disconnected my internet connection and sent this:

Briefly explain how large language models work so a middle schooler can understand it

[Screenshot: GPT for Arch Linux]

Ok, I’m officially stumped. As you can see in my screenshot, I have no internet connection, yet it still spits out an explanation.

How does it work? I have no idea.

Who is Multiplex and how are they making money? I have no idea.

My guess is, this is using some API to a GPT model, and some clever local caching stuff. They are probably releasing it for free but will charge later. This is fair. If I get great results from it, I’ll gladly pay a reasonable fee to use it.

So, I will continue hammering on this and see what I can get out of it. I’ll share my results here.

Thanks for reading!

Questions? Comments? Yell at Me!


Published: Apr 14, 2023 by Jeremy Morgan. Contact me before republishing this content.