Testing JetBrains AI Assistant

The JetBrains AI Assistant logo above the logo text "JetBrains AI" in white on a black background.

At my current client, we’re evaluating AI-assisted programming tools. I’m using JetBrains AI Assistant for the time being. Here are a few notes and tips from my experience with it.

Why JetBrains AI Assistant?

IntelliJ has been my main IDE for about a decade now, and the most tightly integrated AI assistant for it is, not surprisingly, JetBrains’ very own AI Assistant. GitHub Copilot would be the other option. At the time I started my evaluation of AI Assistant, Microsoft was only just bringing Copilot’s chat features into beta testing in the IntelliJ plugin. Since then, however, Copilot’s chat has been made generally available in IntelliJ.

Context is King

The AI Assistant chat panel is the most flexible and powerful tool in the AI Assistant toolbox. It’s the one place where you can update context and iterate to improve the results generated by the AI; at least I haven’t found an obvious way to do this outside of a chat. And context is essential to getting decent results. Exactly what context is available to the AI in any given chat is not necessarily clear. It’s not exactly a black box, but it’s kind of dark gray. I have the Smart Chat mode enabled, which allows the IDE to feed new context to a chat automatically. You’ll sometimes see the IDE feeding the AI with context through temporary status messages in the chat panel. Since these messages don’t persist, and the updates can be quick, it can be a bit of a struggle to catch exactly what will be available to the AI.

You’re supposed to be able to have the AI pick up code from other files automatically just by mentioning them, but your mileage may vary. It could be because I work on Clojure projects right now, but I often find that it’s not able to load namespaces into the chat. However, I did notice from the status messages in the chat that it can scan code in open tabs, and this is an easy way to bring code into the current chat context without copying and pasting it: open the relevant file and tell the AI that it will find the code in your open tabs.

Bring Your Own Prompts

In the context menu of your code, you will find an AI Assistant sub-menu. By default you get options like refactoring and finding problems in the code. But guess what: you can write your own prompts and have them added to this menu. To quickly jump to the right place in IntelliJ’s settings (Settings→Tools→AI Assistant→Prompt Library), you can select Add Your Prompts… from the context menu.

Screenshot of context menu shown in IntelliJ's editor. The AI Assistant sub-menu is open, with the last option Add Your Prompts... highlighted.

Now you can add a new prompt to the list, like the Write test prompt you can see in the context menu in the screenshot. In the prompt itself, you can inject the selected code using the variable $SELECTION. Here’s my very simple Write test prompt:

A screenshot of IntelliJ's Prompt Library settings. A prompt called Write test is selected containing the prompt: Write tests for the code: $SELECTION
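In plain text, that’s nothing more than an instruction with the selection variable spliced in. If you want more control over the output, you can elaborate on it; the wording below beyond the original one-liner is my own illustration, not a built-in template:

    Write tests for the code: $SELECTION
    Use the same test framework as the rest of the project,
    and cover both typical inputs and edge cases.

When you run the prompt from the context menu, $SELECTION is replaced with whatever code you have highlighted in the editor.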

Commit messages

You may have noticed that the list of prompts on the left side of the Prompt Library settings contains a built-in action, Commit Message Generation. Here you can customize the prompt IntelliJ uses for generating commit messages in the Commit tool window. If you typically write commit messages in a language other than English, or want them formatted in a specific way, you can add this information to the prompt.
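For example, a team that writes commit messages in Norwegian and sticks to short subject lines might append something like this to the built-in prompt (my own illustration, not a JetBrains template):

    Write the commit message in Norwegian.
    Keep the subject line under 50 characters, and separate it
    from the detailed description with a blank line.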

Even though I don’t normally use IntelliJ’s VCS tools, I think the AI Assistant does a good job of writing detailed commit messages. If you open the Commit tool window, you’ll find a button to generate commit messages for staged changes just above the commit message input. It has the AI Assistant purple spiral logo on it. Pressing it will send the Commit Message Generation prompt and the staged changes to the AI Assistant, which will generate a summary of those changes for the commit message.

Commit toolbar with the following controls from left to right: Amend commit checkbox, commit history button with a clock icon, and the AI Assistant's generate commit message button with a purple spiral icon.

I like to use this for larger changes, or if I’m just unsure what to write in the commit message. It might require some editing, but it does a good job of including details you might not usually bother with. Commit messages are like documentation: you probably don’t care much about what they say right now, but in a year or two, when you run blame to find out just why you or someone else changed the code in a particular way, they’re really helpful.

Generating code

I mentioned that the real power tool in the AI Assistant is the chat, but sometimes all you want is to make some simple changes in a file. For this, the code generation tool does a good job. It will generate a diff where you can select changes to merge into your code, similar to IntelliJ’s VCS merge tool. If you aren’t satisfied with the result, you can change the prompt or write a new one, or you can try generating a new result with the same prompt.

A screenshot showing the controls for generated code.
The Specify button lets you update the prompt. The button next to it with the circular arrow will generate new results with the same prompt.

Generating documentation, like I did for the screenshot above, is an example of something the in-editor code generation does a good job of. Refactoring is another example where I’m pretty satisfied with the results.
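As a hypothetical illustration (my own example, not the code from the screenshot): given a small Clojure function, a “write documentation” instruction would typically produce a docstring along these lines:

    ;; assumes (require '[clojure.string :as str])
    (defn slugify
      "Converts a title into a URL-friendly slug by lower-casing it
       and replacing runs of whitespace with hyphens."
      [title]
      (-> title
          str/lower-case
          (str/replace #"\s+" "-")))

The generated diff then lets you accept or reject the docstring just like any other change.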

There are two variations of the code generation tool: multi-line autocompletion and the full-blown code generator. Autocompletion doesn’t work for all languages, but you can enable experimental universal inline completion to try it in “unsupported languages”. After enabling this option, I had to restart IntelliJ and wait for indexing to complete before I got any suggestions. Results for Clojure were… well, experimental. It would repeat outer clauses, define a function within itself, and such. It was easy enough to clean up after the fact, but it still seemed a bit weird. Results were mixed, but it did a good job when I had to edit a Bash script the other day.
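To make the Clojure failure mode concrete, here’s a reconstructed illustration of the kind of suggestion I mean, not verbatim output:

    ;; reconstructed illustration: the completion repeats the outer defn
    ;; inside itself instead of just filling in the body
    (defn total-price [items]
      (defn total-price [items]
        (reduce + (map :price items))))

Deleting the inner defn line and one closing paren fixes it, but you have to notice it first.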

A screenshot showing the general AI Assistant settings, with all options enabled.

One thing it did get right, and which I’ve seen GitHub Copilot struggle with, was generating complete S-expressions. Perhaps Copilot has gotten better at this, but it would typically stop generating code without closing the S-expressions. After you accepted the incomplete suggestion, it would usually offer a follow-up suggestion adding the closing parentheses.
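Again as a reconstructed illustration rather than verbatim Copilot output, a suggestion could stop mid-expression like this:

    ;; the suggestion ends here, three parentheses short
    (defn full-name [person]
      (str (:first-name person) " "
           (:last-name person

Only after accepting it would a second suggestion appear to close things off with ")))".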

Is it worth it?

I think the value of JetBrains AI Assistant is much higher with languages like Java, JavaScript/TypeScript, and Kotlin. They continue to work on it, especially the context feeding, so it’s likely to improve over the coming months. I think the 7-day evaluation period JetBrains offers is way too short. If you are deliberate in testing it, you might get a feel for whether you like it, but it’s easy to forget that it’s there when automatic inline completion isn’t on, and using it consistently requires that you change your work habits a bit.

I’m not sure I’ll continue my subscription. Maybe I’ll try GitHub Copilot next, or just take a break and try it again next year. It has been helpful with refactoring code, though.
