DevTools around IntelliJ Platform

    Understanding how IntelliJ’s plugin ecosystem works

    • Custom Language support
      • File type recognition / Lexical analysis / Syntax highlighting / Formatting / Code insight and code completion / Inspections & quick-fixes / Intention actions
    • Framework integration
      • Consists of improved code insight features typical for a given framework, as well as the option to use framework-specific functionality directly from the IDE. Sometimes it also includes language support elements for a custom syntax or DSL (framework-specific code insight)
    • Tool integration
      • Implementation of additional actions / related UI components
      • Profilers, obfuscators, code analyzers, services, etc.
    • User interface add-ons
      • Plugins in this category apply various changes to the standard user interface of the IDE

    Building Plugins

    Gradle-based development is the preferred way to create plugins (it replaces the older DevKit-based approach). The gradle-intellij-plugin takes care of your plugin project's dependencies - both the base IDE and any plugin dependencies - and also provides tasks to run the IDE with your plugin.

    The runIde task will launch a Development Instance of the IDE with the plugin enabled. By default, the Gradle plugin will fetch and use the version of the JetBrains Runtime for the Development Instance corresponding to the version of the IntelliJ Platform used for building the plugin project. The JBR is an environment for running IntelliJ Platform-based IDEs on Windows, macOS, and Linux which has some modifications such as fixes for native crashes not present in official JDK builds.

    • Creating a plugin with New Project Wizard
    • Configuring Gradle for plugin
      • Gradle tasks: Setup DSL intellij { ... } / Running DSL runIde { ... } / Publishing DSL publishPlugin { ... }
      • Platform version: By default, the Gradle plugin will build against the IntelliJ Platform defined by the latest EAP snapshot of the IntelliJ IDEA CE. Explicitly setting the Setup DSL attributes intellij.version and intellij.type tells the Gradle plugin to use that configuration of the IntelliJ Platform to build the plugin project.
      • Plugin dependencies: declare them via the intellij.plugins attribute so the Gradle plugin can fetch the required artifacts. Note that the plugin project is still required to declare these dependencies in its Plugin Configuration (plugin.xml) file.
      • Sandbox directory: intellij.sandboxDirectory specifies the sandbox directory to be used while running the plugin in an IDE Development Instance
    • Kotlin configuration: Use the kotlin-gradle-plugin for precise control over the Kotlin build process.
      • You can also use Kotlin to write Gradle build scripts build.gradle.kts as an alternative to build.gradle Groovy build script.
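    The Setup, Running, and Publishing DSLs above can be sketched in a Groovy build.gradle like the following (the plugin and platform versions, channel name, and token property are illustrative placeholders, not prescribed values):

```groovy
plugins {
    id 'java'
    id 'org.jetbrains.intellij' version '0.7.3' // illustrative version
}

// Setup DSL: which IntelliJ Platform to build against
intellij {
    version '2020.1'   // intellij.version
    type 'IC'          // intellij.type: IntelliJ IDEA Community Edition
    plugins = ['java'] // intellij.plugins: dependency artifacts to fetch
    sandboxDirectory = "${project.buildDir}/idea-sandbox"
}

// Running DSL: tweak the Development Instance
runIde {
    jvmArgs '-Xmx2g'
}

// Publishing DSL: upload to a repository channel
publishPlugin {
    token intellijPublishToken // pass with -PintellijPublishToken=...
    channels 'nightly'
}
```

    Remember that plugins listed under intellij.plugins must also be declared as <depends> entries in plugin.xml.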

    Publish Plugins & Custom Plugin Repository

    You can maintain a custom plugin repository instead of (or alongside) the JetBrains Plugin Repository. Use a Gradle property to identify the different versions published to each.

    • Background update: when the project component is initialized, check for the latest version of the plugin. If there is a newer version, download it to a temporary location.
    • Plugin installation: the class StartupActionScriptManager in the package com.intellij.ide.startup holds an interface ActionCommand (implementations: CopyCommand, DeleteCommand, UnzipCommand). Call the static method addActionCommand with an UnzipCommand; the plugin zip should be unzipped to PathManager.getPluginsPath(). BootstrapClassLoaderUtil executes these action scripts when loading plugins.
    • Custom Repository (like the JetBrains Marketplace), API Endpoints:
      • listPlugins: filter plugins by channel / buildNumber
      • uploadPlugin: the channels attribute is part of the Publishing DSL, e.g. release / nightly
      • downloadPlugin: storage for plugin artifacts
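    The listPlugins endpoint typically serves an XML descriptor in the format the IDE expects for custom repositories; a minimal example might look like this (the IDs, URLs, and build numbers are placeholders):

```xml
<!-- updatePlugins.xml returned by listPlugins -->
<plugins>
  <!-- id must match the <id> in the plugin's plugin.xml;
       url points at the downloadPlugin endpoint -->
  <plugin id="com.example.myPlugin"
          url="https://plugins.example.com/downloadPlugin?id=com.example.myPlugin"
          version="1.2.0">
    <idea-version since-build="201.0"/>
  </plugin>
</plugins>
```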

    Plugin Structure

    Plugin Content

    The plugin jar file consists of the configuration file (META-INF/plugin.xml) and the classes that implement the plugin functionality. If the plugin has dependencies, the plugin .jar file is placed in the /lib folder under the plugin’s “root” folder, together with all required bundled libraries. All jars from the /lib folder are automatically added to the classpath.

    Plugin Class Loaders

    A separate class loader is used to load the classes of each plugin. This allows each plugin to use a different version of a library, even if the same library is used by the IDE itself or by another plugin.

    By default, the main IDE class loader loads classes that were not found in the plugin class loader. However, in the plugin.xml file, you may use the <depends> element to specify that a plugin depends on one or more other plugins. In this case the class loaders of those plugins will be used for classes not found in the current plugin. This allows a plugin to reference classes from other plugins.

    Plugin Services

    A service is a plugin component loaded on demand when your plugin calls the getService() method of the ServiceManager class. The IntelliJ Platform ensures that only one instance of a service is loaded even though the service is called several times. A service must have an implementation class which is used for service instantiation. A service may also have an interface class which is used to obtain the service instance and provides API of the service.

    Distinct extension points:

    • com.intellij.applicationService - application level service
    • com.intellij.projectService - project level service
    <extensions defaultExtensionNs="com.intellij">
      <!-- Declare the application level service -->
      <applicationService serviceInterface="mypackage.MyApplicationService" serviceImplementation="mypackage.MyApplicationServiceImpl" />
      <!-- Declare the project level service -->
      <projectService serviceInterface="mypackage.MyProjectService" serviceImplementation="mypackage.MyProjectServiceImpl" />
    </extensions>

    To provide a custom implementation for test or headless environments, additionally specify testServiceImplementation / headlessImplementation. To improve startup performance, avoid any heavy initialization in the service constructor (constructor injection is deprecated).

    To retrieve a service: getting a service doesn’t need a read action and can be performed from any thread. If a service is requested from several threads, it will be initialized in the first thread; the other threads will be blocked until it is fully initialized.

    MyApplicationService applicationService = ServiceManager.getService(MyApplicationService.class);
    MyProjectService projectService = project.getService(MyProjectService.class);
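    A minimal sketch of the interface/implementation pair declared in the XML above (the package, class, and method names are illustrative):

```java
// mypackage/MyProjectService.java - the interface callers obtain
public interface MyProjectService {
    String getProjectGreeting();
}

// mypackage/MyProjectServiceImpl.java - instantiated lazily by the platform
public class MyProjectServiceImpl implements MyProjectService {
    private final Project project;

    // Keep the constructor cheap: no heavy initialization here
    public MyProjectServiceImpl(Project project) {
        this.project = project;
    }

    @Override
    public String getProjectGreeting() {
        return "Hello from " + project.getName();
    }
}
```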

    Plugin Listeners

    Listeners allow plugins to declaratively subscribe to events delivered through the message bus. You can define both application- and project-level listeners. Declarative registration can achieve better performance than registering listeners from code, because listener instances are created lazily (the first time an event is sent to the topic) rather than during application startup or project opening.

    The topic attribute specifies the listener interface corresponding to the type of events you want to receive. Normally, this is the interface used as the type parameter of the Topic instance for the type of events. The class attribute specifies the class in your plugin that implements the listener interface and receives the events.

    As a specific example, if you want to receive events about all changes in the virtual file system, you need to implement the BulkFileListener interface, corresponding to the topic VirtualFileManager.VFS_CHANGES:

      <applicationListeners>
        <listener class="myPlugin.MyVfsListener" topic="com.intellij.openapi.vfs.newvfs.BulkFileListener"/>
      </applicationListeners>

    Then you provide the listener implementation as a top-level class:

    public class MyVfsListener implements BulkFileListener {
        @Override
        public void after(@NotNull List<? extends VFileEvent> events) {
            // handle the events
        }
    }
    Plugin Component

    Plugin components are a legacy feature that uses the <application-components> and <project-components> tags in plugin.xml. Migration paths:

    • To manage some state or logic that is only needed when the user performs a specific operation, use a Service
    • To store the state of your plugin at the application or project level, use a Service and implement the PersistentStateComponent interface
    • To subscribe to events, use a Listener or create an extension for a dedicated extension point
    • To execute code on application startup (which should be avoided), add a listener subscribing to the AppLifecycleListener topic
    • To execute code when a project is being opened, provide StartupActivity impl and register an extension for the com.intellij.postStartupActivity or com.intellij.backgroundPostStartupActivity EP
    • To execute code on project closing or application shutdown, implement the Disposable interface in a Service and place the code in the dispose() method
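    For example, the post-startup migration path could be sketched like this (everything except the EP name is illustrative):

```java
// Runs once after the project has been opened; replaces projectOpened()
// from a legacy project component. Registered in plugin.xml as:
//   <extensions defaultExtensionNs="com.intellij">
//     <postStartupActivity implementation="myPlugin.MyPostStartupActivity"/>
//   </extensions>
public class MyPostStartupActivity implements StartupActivity {
    @Override
    public void runActivity(@NotNull Project project) {
        // lightweight setup; push long-running work to a background task
    }
}
```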

    What is the IntelliJ Platform?

    The IntelliJ Platform provides all of the infrastructure that JetBrains IDEs need to provide rich language tooling support. It provides a component driven, cross-platform, JVM-based application host with a high-level user interface toolkit for creating tool windows, tree views and lists (supporting fast search) as well as popup menus and dialogs.

    It includes a powerful full text editor, and provides abstract implementations of syntax highlighting, code folding, code completion, and other rich text editing features.

    Furthermore, it includes OpenAPIs to build common IDE functionality, such as a project model and a build system. It also provides infrastructure for a very rich debugging experience, with language agnostic advanced breakpoint support, call stacks, watch windows, and expression evaluation.

    But the IntelliJ Platform’s real power comes from the Program Structure Interface (PSI). This is a set of functionality that can be used to parse files and build rich syntactic and semantic models of the code, and to build indexes from this data. This powers a lot of functionality, from quick navigating to files, types and symbols, to the contents of code completion windows and find usages, code inspections and code rewriting, for quick fixes or refactorings, as well as many other features.

    The IntelliJ Platform includes parsers and a PSI model for a number of languages, and its extensible nature means that it is possible to add support for other languages.


    Products built on the IntelliJ Platform are extensible applications, with the platform being responsible for the creation of components, and the injection of dependencies into classes.

    Plugins can extend the platform in lots of ways, from adding a simple menu item to adding support for a complete language, build system and debugger. A lot of the existing functionality in the IntelliJ Platform is written as plugins that can be included or excluded depending on the needs of the end product.

    IDEs Based on the IntelliJ Platform

    The IntelliJ Platform underlies many JetBrains IDEs. IntelliJ IDEA Ultimate is a superset of the IntelliJ IDEA Community Edition, but includes closed source plugins. Similarly, other products such as WebStorm and DataGrip are based on the IntelliJ IDEA Community Edition, but with a different set of plugins included and excluding other default plugins. This allows plugins to target multiple products, as each product will include base functionality and a selection of plugins from the IntelliJ IDEA Community Edition repo.


    JetBrains Rider uses the IntelliJ Platform differently than other IntelliJ based IDEs. It uses the IntelliJ Platform to provide the user interface for a C# and .NET IDE, with the standard IntelliJ editors, tool windows, debugging experience and so on. It also integrates into the standard Find Usages and Search Everywhere UI, and makes use of code completion, syntax highlighting, and so on.

    However, Rider doesn’t create a full PSI (syntactic and semantic) model for C# files. Instead, it reuses ReSharper to provide language functionality. All of the C# PSI model and all inspections and code rewriting, such as quick fixes and refactorings are run out of process, in a command line version of ReSharper. This means that creating a plugin for Rider involves two parts - a plugin that lives in the IntelliJ “front end” to show user interface, and a plugin that lives in the ReSharper “back end” to analyze and work with the C# PSI.

    Fortunately, many plugins can simply work with the ReSharper backend - Rider takes care of displaying the results of inspections and code completion, and many plugins can be written that don’t require an IntelliJ UI component.

    Base Platform

    Essential concepts

    • Component model - the IntelliJ Platform is a component based application, and is responsible for creating components and injecting dependencies.
    • Virtual files - all file access should go through the VFS which abstracts and caches the file system.
    • Code model - the IntelliJ Platform’s code model is called the PSI - the Program Structure Interface. The PSI parses code, builds indexes and creates a semantic model.
    • Extension points - extensions are the most common way for a plugin to extend the functionality of the IntelliJ Platform, and most features and services can be extended.
      • Actions - menu and toolbar items; similarly, EPs such as com.intellij.toolWindow allow plugins to add tool windows
      • Code inspections - code analysis that looks at the syntax trees and semantic models and highlight issues in the editor
      • Intentions - context specific actions that are available in the Alt+Enter menu when the text caret is at a certain location.

    Messaging infrastructure

    An implementation of the Observer pattern that provides additional features such as broadcasting through a hierarchy of buses and special handling of nested events (a nested event is an event fired, directly or indirectly, from the callback of another event).


    The Topic class serves as an endpoint of the messaging infrastructure, i.e. clients are allowed to subscribe to a topic within a particular bus and to send messages to a particular topic within a particular bus. A Topic has:

    • display name - just a human-readable name used for logging/monitoring purposes
    • broadcast direction - explained in detail under Broadcasting; the default value is TO_CHILDREN
    • listener class - the business interface for the particular topic. Subscribers register an implementation of this interface with the messaging infrastructure, and publishers may later retrieve an object that conforms (IS-A) to it and call any method defined on it. The messaging infrastructure takes care of dispatching the call to all subscribers of the topic, i.e. the same method with the same arguments is called on each registered callback

    Message Bus
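    A sketch of the subscribe/publish flow, assuming a hypothetical ChangeListener business interface with its own Topic:

```java
// Business interface plus its Topic (all names are hypothetical)
public interface ChangeListener {
    Topic<ChangeListener> TOPIC = Topic.create("my plugin changes", ChangeListener.class);
    void changed(String what);
}

// Subscribing, e.g. from project-level code; the connection lives until disposed
MessageBusConnection connection = project.getMessageBus().connect();
connection.subscribe(ChangeListener.TOPIC, what -> System.out.println("changed: " + what));

// Publishing: the returned proxy dispatches the call to every subscriber
project.getMessageBus().syncPublisher(ChangeListener.TOPIC).changed("settings");
```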



    UI Components


    Tool Windows

    Tool windows are child windows of the IDE used to display information. These windows generally have their own toolbars (referred to as tool window bars) along the outer edges of the main window, containing one or more tool window buttons, which activate panels displayed on the left, bottom and right sides of the main IDE window.

    To manage the contents of a tool window, call ToolWindow.getContentManager(). To add a tab (content), first create it by calling ContentManager.getFactory().createContent(), and then add it to the tool window using ContentManager.addContent().

    To create a plugin that displays a custom tool window, perform the following steps:

    • In a plugin project, create a class implementing ToolWindowFactory
    • In this class, override the createToolWindowContent method. This method specifies the content for the tool window
    • In plugin.xml, create the <extensions defaultExtensionNs="com.intellij">...</extensions> section
    • To this section, add the <toolWindow> element
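    The steps above can be sketched as follows (the tool window id, tab title, and class names are placeholders):

```java
public class MyToolWindowFactory implements ToolWindowFactory {
    // Registered in plugin.xml:
    //   <toolWindow id="MyToolWindow" anchor="right"
    //               factoryClass="myPlugin.MyToolWindowFactory"/>
    @Override
    public void createToolWindowContent(@NotNull Project project, @NotNull ToolWindow toolWindow) {
        JPanel panel = new JPanel(new BorderLayout());
        panel.add(new JLabel("Hello from my tool window"), BorderLayout.CENTER);

        // Create a tab (content) and add it to the tool window
        ContentFactory factory = toolWindow.getContentManager().getFactory();
        Content content = factory.createContent(panel, "My Tab", false);
        toolWindow.getContentManager().addContent(content);
    }
}
```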


    Dialogs

    The DialogWrapper is the base class which is supposed to be used for all modal dialogs (and some non-modal dialogs) shown in IntelliJ Platform plugins.

    • Call the base class constructor and provide either a project in the frame of which the dialog will be displayed, or a parent component for the dialog.
    • Call the init() method from the constructor of your dialog class
    • Call the setTitle() method to set the title for the dialog box
    • Implement the createCenterPanel() method to return the component comprising the main contents of the dialog.
    • Optional: Override the getPreferredFocusedComponent() method and return the component that should be focused when the dialog is first displayed.
    • Optional: Override the getDimensionServiceKey() method to return the identifier which will be used for persisting the dialog dimensions.
    • Optional: Override the getHelpId() method to return the context help topic associated with the dialog.

    The DialogWrapper class is often used together with UI Designer forms. In this case, you bind a UI Designer form to your class extending DialogWrapper, bind the top-level panel of the form to a field and return that field from the createCenterPanel() method.

    To display the dialog, you call the show() method and then use the getExitCode() method to check how the dialog was closed. The showAndGet() method can be used to combine these two calls.

    To customize the buttons displayed in the dialog, you can override either the createActions() or createLeftActions() methods. Both of these methods return an array of Swing Action objects. If the button that you’re adding closes the dialog, you can use DialogWrapperExitAction, as the base class for your action. Use action.putValue(DialogWrapper.DEFAULT_ACTION, true) to set the default button.

    To validate the data entered into the dialog, you can override the doValidate() method. The method will be called automatically by timer. If the currently entered data is valid, you need to return null from your implementation. Otherwise, you need to return a ValidationInfo object which encapsulates an error message and an optional component associated with the invalid data. If you specify a component, an error icon will be displayed next to it, and it will be focused when the user tries to invoke the OK action.
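    Putting the steps together, a minimal dialog might look like this (all names and labels are illustrative):

```java
public class MyNameDialog extends DialogWrapper {
    private final JTextField nameField = new JTextField(20);

    public MyNameDialog(Project project) {
        super(project);        // dialog is shown in the frame of this project
        setTitle("Enter Name");
        init();                // must be called from the constructor
    }

    @Override
    protected JComponent createCenterPanel() {
        JPanel panel = new JPanel(new BorderLayout(8, 0));
        panel.add(new JLabel("Name:"), BorderLayout.WEST);
        panel.add(nameField, BorderLayout.CENTER);
        return panel;
    }

    @Override
    protected ValidationInfo doValidate() {
        // Called automatically by timer while the dialog is open
        if (nameField.getText().trim().isEmpty()) {
            return new ValidationInfo("Name must not be empty", nameField);
        }
        return null; // input is valid
    }
}

// Usage: if (new MyNameDialog(project).showAndGet()) { /* OK was pressed */ }
```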


    Popups

    Popups are semi-modal windows that disappear automatically on focus loss. They can optionally display a title, are optionally movable and resizable (and support remembering their size), and can be nested (showing another popup when an item is selected).

    The JBPopupFactory interface allows you to create popups that display different kinds of components, depending on your specific needs. The most commonly used methods are:

    • createComponentPopupBuilder() is the most generic one, allowing you to show any Swing component in the popup.
    • createPopupChooserBuilder() creates a popup for choosing one or more items from a plain java.util.List
    • createConfirmation() creates a popup for choosing between two options, and performing different actions depending on which option is selected.
    • createActionGroupPopup() creates a popup which shows the actions from an action group and executes the action selected by the user.

    If you need to create a list-like popup which is more flexible than a simple JList but don’t want to represent the possible choices as actions in an action group, you can work directly with the ListPopupStep interface and the JBPopupFactory.createListPopup() method. Normally you don’t need to implement the entire interface; instead, you can derive from the BaseListPopupStep class. The key methods to override are getTextFor() (returning the text to display for an item) and onChosen() (called when an item is selected). By returning a new popup step from the onChosen() method, you can implement hierarchical (nested) popups.

    Once you’ve created the popup, you need to display it by calling one of the show() methods. You can let the IntelliJ Platform automatically choose the position based on the context, by calling showInBestPositionFor(), or specify the position explicitly through methods like showUnderneathOf() and showInCenterOf().
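    For instance, a simple chooser popup over a list of strings (the items and title are made up):

```java
JBPopupFactory.getInstance()
    .createPopupChooserBuilder(Arrays.asList("Alpha", "Beta", "Gamma"))
    .setTitle("Pick an Item")
    .setItemChosenCallback(choice -> {
        // react to the selected item
    })
    .createPopup()
    .showInBestPositionFor(editor); // position chosen from the editor context
```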


    Notifications

    One of the leading design principles is avoiding the use of modal message boxes for notifying the user about errors and other situations that may warrant the user’s attention. As a replacement, the IntelliJ Platform provides multiple non-modal notification UI options:

    • Editor Hints: For actions invoked from the editor (such as refactorings, navigation actions and different code insight features), the best way to notify the user about the inability to perform an action is to use the HintManager class. Its method showErrorHint() displays a floating popup above the editor which is automatically hidden when the user starts performing another action in the editor.
    • Notifications: advantages
      • The user can control the way each notification type is displayed under Settings | Appearance & Behavior | Notifications
      • All displayed notifications are gathered in the Event Log tool window and can be reviewed later
      • The text of the notification can include HTML tags

    File and Class Choosers

    • Via Dialog
      • To let a user choose a file, directory or multiple files, use the FileChooser.chooseFiles() method. This method has multiple overloads. The best method to use is the one which returns void and takes a callback receiving the list of selected files as a parameter.
      • The FileChooserDescriptor class allows you to control which files can be selected. The constructor parameters specify whether files and/or directories can be selected, and whether multiple selection is allowed
    • Via TextField
      • Use a text field for entering a path, with an ellipsis button (“…”) that shows the file chooser. To create such a control, use the TextFieldWithBrowseButton component and call the addBrowseFolderListener() method on it to set up the file chooser. As an added bonus, this enables filename completion when entering paths in the text box.
    • Via Tree
      • An alternative UI for selecting files, which works best when the most common way of selecting a file is by typing its name, is available through the TreeFileChooserFactory class.
    • Class and Package Choosers
      • To select a Java class, you can use the TreeClassChooserFactory class. Its different methods allow you to specify the scope from which the classes are taken, to restrict the choice to descendants of a specific class or implementations of an interface, and to include or exclude inner classes from the list. For choosing a Java package, you can use the PackageChooserDialog class.
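    The text-field variant can be wired up like this (the title and descriptor settings are illustrative):

```java
TextFieldWithBrowseButton pathField = new TextFieldWithBrowseButton();
// Descriptor allowing a single directory: the six booleans are
// chooseFiles, chooseFolders, chooseJars, chooseJarsAsFiles,
// chooseJarContents, chooseMultiple
FileChooserDescriptor descriptor = new FileChooserDescriptor(false, true, false, false, false, false);
pathField.addBrowseFolderListener("Select Output Directory", null, project, descriptor);
```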

    Editor Components

    Compared to Swing JTextArea, the IntelliJ Platform’s editor component has a ton of advantages: syntax highlighting support, code completion, code folding and much more. IntelliJ Platform editors are normally displayed in editor tabs, but they can be embedded in dialogs or tool windows, too. This is enabled by the EditorTextField component.

    When creating an EditorTextField, you can specify the following attributes:

    • The file type according to which the text in the text field is parsed
    • Whether the text field is read-only or editable
    • Whether the text field is single-line or multiline
    // A common use case for EditorTextField is entering the name of a Java class or package.
    PsiFile psiFile = PsiDocumentManager.getInstance(editor.getProject()).getPsiFile(editor.getDocument());
    PsiElement element = psiFile.findElementAt(editor.getCaretModel().getOffset());
    // create a code fragment representing the class or package name
    PsiExpressionCodeFragment code = JavaCodeFragmentFactory.getInstance(editor.getProject()).createExpressionCodeFragment("", element, null, true);
    // get the document corresponding to the code fragment
    Document document = PsiDocumentManager.getInstance(editor.getProject()).getDocument(code);
    // Pass the returned document to the EditorTextField constructor or its setDocument() method.
    EditorTextField myInput = new EditorTextField(document, editor.getProject(), JavaFileType.INSTANCE);

    List and Tree

    Whenever you would normally use a standard Swing JList component, it’s recommended to use the JBList class as drop-in replacement. JBList supports the following additional features on top of JList:

    • Drawing a tooltip with complete text of an item if the item doesn’t fit into the list box width
    • Drawing a gray text message in the middle of the list box when it contains no items. The text can be customized by calling getEmptyText().setText()
    • Drawing a busy icon in the top right corner of the list box to indicate that a background operation is being performed. This can be enabled by calling setPaintBusy()

    Similarly, the Tree class provides a replacement for the standard JTree class. In addition to the features of JBList, it supports wide selection painting (Mac style) and auto-scroll on drag & drop.

    • To customize the presentation of items in a list box or a tree: ColoredListCellRenderer and ColoredTreeCellRenderer. These classes allow you to compose the presentation out of multiple text fragments with different attributes by calling append() and to set an optional icon for the item by calling setIcon()
    • To facilitate keyboard-based selection of items in a list box or a tree, you can install a speed search handler on it using the ListSpeedSearch and TreeSpeedSearch
    • A very common task in plugin development is showing a list or a tree where the user is allowed to add, remove, edit or reorder the items. The implementation of this task is greatly facilitated by the ToolbarDecorator class. This class provides a toolbar with actions on items and automatically enables drag & drop reordering of items in list boxes if supported by the underlying list model.
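    A typical ToolbarDecorator setup over a JBList (the model contents and texts are placeholders):

```java
DefaultListModel<String> model = new DefaultListModel<>();
JBList<String> list = new JBList<>(model);
list.getEmptyText().setText("No servers configured");

// Panel with add/remove toolbar buttons and drag & drop reordering
JPanel panel = ToolbarDecorator.createDecorator(list)
    .setAddAction(button -> model.addElement("new server"))
    .setRemoveAction(button -> model.remove(list.getSelectedIndex()))
    .createPanel();
```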

    JCEF - Java Chromium Embedded Framework

    JCEF is a Java port of the CEF framework for embedding Chromium-based browsers into applications that use Swing (an experimental feature).

    Action System

    An action is a class derived from the AnAction class, whose actionPerformed() method is called when the menu item or toolbar button is selected. Actions are the most common way for a user to invoke the functionality of your plugin. An action can be invoked from a menu or a toolbar, using a keyboard shortcut, or from the Find Action interface.

    IntelliJ has its own EDT, with DataManager and actions:

    • The JVM listens to input devices such as the keyboard and mouse
    • It converts OS events into Java events
    • The Event Dispatch Thread (EDT) dispatches the events

    An action’s method AnAction.update() is called by the IntelliJ Platform framework to update the state of an action. The state (enabled, visible) of an action determines whether the action is available in the UI of an IDE. An object of type AnActionEvent is passed to this method, and it contains the information about the current context for the action. Actions are made available by changing state in the Presentation object associated with the event context. It is vital that update() methods execute quickly and return execution to the IntelliJ Platform.

    Action Context

    The AnActionEvent object passed to update() carries information about the current context for the action. Context information is available from the methods of AnActionEvent, providing information such as the Presentation, and whether the action is triggered from a Toolbar. Additional context information is available using the method AnActionEvent.getData(). Keys defined in CommonDataKeys are passed to the getData() method to retrieve objects such as Project, Editor, PsiFile, and other information. Accessing this information is relatively light-weight and is suited for AnAction.update().

    The code that executes in the AnAction.actionPerformed() method should execute efficiently, but it does not have to meet the same stringent requirements as the update() method. It can modify, remove, or add PSI elements to a file open in the editor.


    For every place where the action appears, a new Presentation is created. Therefore the same action can have different text or icons when it appears in different places of the user interface. Different presentations for the action are created by copying the Presentation returned by the AnAction.getTemplatePresentation() method.
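    A minimal action following these rules - a fast update() that only inspects the context, with the real work in actionPerformed() (the class name and behavior are illustrative):

```java
public class ReverseSelectionAction extends AnAction {
    @Override
    public void update(@NotNull AnActionEvent e) {
        // Must return quickly: only check the context, never do real work
        Editor editor = e.getData(CommonDataKeys.EDITOR);
        boolean hasSelection = editor != null && editor.getSelectionModel().hasSelection();
        e.getPresentation().setEnabledAndVisible(hasSelection);
    }

    @Override
    public void actionPerformed(@NotNull AnActionEvent e) {
        Editor editor = e.getRequiredData(CommonDataKeys.EDITOR);
        String selected = editor.getSelectionModel().getSelectedText();
        // ... modify the document here, inside a write command action
    }
}
```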

    Persisting Data / State

    Using PropertiesComponent for simple non-roamable persistence

    To persist a few simple values, the easiest way to do so is to use the com.intellij.ide.util.PropertiesComponent service. It can be used for saving both application-level values and project-level values (stored in the workspace file). Roaming is disabled for PropertiesComponent, so use it only for temporary, non-roamable properties.

    Use the PropertiesComponent.getInstance() method for storing application-level values, and the PropertiesComponent.getInstance(Project) method for storing project-level values.
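    For example (the key names are arbitrary; prefix them with your plugin id to avoid clashes):

```java
// Application-level value
PropertiesComponent app = PropertiesComponent.getInstance();
app.setValue("myPlugin.lastUsedUrl", "https://example.com");
String lastUrl = app.getValue("myPlugin.lastUsedUrl");

// Project-level value (stored in the workspace file)
PropertiesComponent prj = PropertiesComponent.getInstance(project);
prj.setValue("myPlugin.welcomeShown", "true");
```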

    Using PersistentStateComponent

    The com.intellij.openapi.components.PersistentStateComponent interface gives you the most flexibility for defining the values to be persisted, their format, and storage location. To use it:

    • mark a service as implementing the PersistentStateComponent interface
    • define the state class
    • specify the storage location using @com.intellij.openapi.components.State

    The implementation of PersistentStateComponent needs to be parameterized with the type of the state class. The state class can either be a separate JavaBean class, or the class implementing PersistentStateComponent itself.

    class MyService implements PersistentStateComponent<MyService> {
      public String stateValue;

      public MyService getState() {
        return this;
      }

      public void loadState(MyService state) {
        XmlSerializerUtil.copyBean(state, this);
      }
    }
    • The implementation of PersistentStateComponent works by serializing public fields, annotated private fields and bean properties into an XML format.
    • The following types of values can be persisted: numbers (both primitive types, such as int, and boxed types, such as Integer) / booleans / strings / collections / maps / enums. For other types, extend com.intellij.util.xmlb.Converter
    • To exclude a public field or bean property from serialization (eg. for password storage), annotate the field or getter with @com.intellij.util.xmlb.annotations.Transient.
    • The state class should have a default constructor and an equals() method; a Kotlin data class satisfies both.
    • To specify where exactly the persisted values will be stored, you need to add a @State annotation to the PersistentStateComponent class.
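    The @State annotation from the last bullet might look like this (the component name and storage file are examples):

```java
@State(
    name = "MyService",
    storages = @Storage("myPlugin.xml") // typically saved under the IDE's config/options directory
)
class MyService implements PersistentStateComponent<MyService> {
    // fields plus getState()/loadState() as shown earlier
}
```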

    Persistent component lifecycle

    • The loadState() method is called after the component has been created (only if there is some non-default state persisted for the component), and after the XML file with the persisted state is changed externally (for example, if the project file was updated from the version control system). In the latter case, the component is responsible for updating the UI and other related components according to the changed state.
    • The getState() method is called every time the settings are saved (for example, on frame deactivation or when closing the IDE). If the state returned from getState() is equal to the default state (obtained by creating the state class with a default constructor), nothing is persisted in the XML. Otherwise, the returned state is serialized in XML and stored.

    Using PasswordSafe to work with credentials

    Retrieve stored credentials

    String key = null; // e.g. serverURL, accountID
    CredentialAttributes credentialAttributes = createCredentialAttributes(key);
    Credentials credentials = PasswordSafe.getInstance().get(credentialAttributes);
    if (credentials != null) {
      String password = credentials.getPasswordAsString();
    }
    // or get the password only
    String password = PasswordSafe.getInstance().getPassword(credentialAttributes);

    private CredentialAttributes createCredentialAttributes(String key) {
      return new CredentialAttributes(CredentialAttributesKt.generateServiceName("MySystem", key));
    }

    Store credentials

    CredentialAttributes credentialAttributes = createCredentialAttributes(serverId); // see previous sample
    Credentials credentials = new Credentials(username, password);
    PasswordSafe.getInstance().set(credentialAttributes, credentials);


    OS-specific storage used by PasswordSafe:

    • Windows: a file in KeePass format
    • macOS: the Keychain, via the Security Framework
    • Linux: the Secret Service API, via libsecret



    Virtual File System

    The virtual file system (VFS) is a component of the IntelliJ Platform that encapsulates most of its activity for working with files. It serves the following main purposes:

    • Providing a universal API for working with files regardless of their actual location (on disk, in an archive, on an HTTP server, etc.)
    • Tracking file modifications and providing both old and new versions of the file content when a modification is detected.
    • Providing a possibility to associate additional persistent data with a file in the VFS.

    The VFS manages a persistent snapshot of some of the contents of the user’s hard disk. The snapshot stores only those files which have been requested at least once through the VFS API, and is asynchronously updated to match the changes happening on the disk. The snapshot is application level, not project level - so, if some file (for example, a class in the JDK) is referenced by multiple projects, only one copy of its contents will be stored in the VFS.

    All VFS access operations go through the snapshot. If some information is requested through the VFS APIs and is not available in the snapshot, it is loaded from disk and stored into the snapshot. If the information is available in the snapshot, the snapshot data is returned. The contents of files and the lists of files in directories are stored in the snapshot only if that specific information was accessed - otherwise, only file metadata like name, length, timestamp, attributes is stored.

    The snapshot is updated from disk during refresh operations, which generally happen asynchronously. All write operations made through the VFS are synchronous - i.e. the contents are saved to disk immediately. A refresh operation synchronizes the state of a part of the VFS with the actual disk contents. Refresh operations are explicitly invoked by the IntelliJ Platform or plugin code - i.e. when a file is changed on disk while the IDE is running, the change will not be picked up immediately by the VFS; the VFS will be updated during the next refresh operation that includes the file in its scope.

    The IntelliJ Platform refreshes the entire project contents asynchronously on startup. By default, it also performs a refresh operation when the user switches to the IDE from another app, but users can turn this off via Settings | Appearance & Behavior | System Settings | Synchronize files on frame activation.

    On Windows, macOS, and Linux, the IntelliJ Platform starts a native file watcher process that receives file change notifications from the file system and reports them to the IDE. If a file watcher is available, a refresh operation looks only at the files that have been reported as changed by the file watcher. If no file watcher is present, a refresh operation walks through all directories and files in the refresh scope.

    Refresh operations are based on file timestamps. If the contents of a file were changed but its timestamp remained the same, the IntelliJ Platform will not pick up the updated contents.

    There is currently no facility for removing files from the snapshot. If a file was loaded there once, it remains there forever unless it was deleted from the disk and a refresh operation was called on one of its parent directories.

    The VFS itself does not honor ignored files and excluded folders. If the application code accesses them, the VFS will load and return their contents. In most cases, the ignored files and excluded folders must be skipped from processing by higher level code.

    During the lifetime of a running instance of an IntelliJ Platform IDE, multiple VirtualFile instances may correspond to the same disk file. They are equal, have the same hashCode and share the user data.

    In nearly all cases, using asynchronous refreshes is strongly preferred. If there is some code that needs to be executed after the refresh is complete, the code should be passed as a postRunnable parameter to one of the refresh methods:

    • RefreshQueue.createSession()
    • VirtualFile.refresh()

    All changes happening in the virtual file system, either as a result of refresh operations or caused by user’s actions, are reported as virtual file system events. VFS events are always fired in the event dispatch thread, and in a write action. The most efficient way to listen to VFS events is to implement the BulkFileListener interface and to subscribe with it to the VirtualFileManager.VFS_CHANGES topic.
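
    A subscription to VFS events as described above might look like this sketch (the listener class name is hypothetical; it assumes the platform's BulkFileListener and message bus APIs):

```java
// Hypothetical listener for bulk VFS change events.
class MyVfsListener implements BulkFileListener {
  @Override
  public void after(@NotNull List<? extends VFileEvent> events) {
    for (VFileEvent event : events) {
      // Fired on the event dispatch thread, inside a write action.
      System.out.println("VFS change: " + event.getPath());
    }
  }
}

// Subscription, e.g. from project-level plugin code:
// project.getMessageBus().connect()
//        .subscribe(VirtualFileManager.VFS_CHANGES, new MyVfsListener());
```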

    Virtual File

    A virtual file VirtualFile is the IntelliJ Platform’s representation of a file in a file system (VFS). Most commonly, a virtual file is a file in your local file system. However, the IntelliJ Platform supports multiple pluggable file system implementations, so virtual files can also represent classes in a JAR file, old revisions of files loaded from a version control repository, and so on.

    The VFS level deals only with binary content. You can get or set the contents of a VirtualFile as a stream of bytes, but concepts like encodings and line separators are handled at higher system levels (e.g. by Document).

    How do I get a virtual file?

    • From an action: e.getData(PlatformDataKeys.VIRTUAL_FILE). If you are interested in multiple selection, you can also use e.getData(PlatformDataKeys.VIRTUAL_FILE_ARRAY)
    • From a path in the local file system: LocalFileSystem.getInstance().findFileByIoFile()
    • From a PSI file: psiFile.getVirtualFile() (may return null if the PSI file exists only in memory)
    • From a document: FileDocumentManager.getInstance().getFile()

    What can I do with it?

    • Typical file operations are available, such as traverse the file system, get file contents, rename, move, or delete
    • Recursive iteration should be performed using VfsUtilCore.iterateChildrenRecursively to prevent endless loops caused by recursive symlinks

    Where does it come from?

    • The VFS is built incrementally, by scanning the file system up and down starting from the project root. New files appearing in the file system are detected by VFS refreshes
    • A refresh operation can be initiated programmatically using VirtualFileManager.getInstance().refresh() or VirtualFile.refresh()
    • VFS refreshes are also triggered whenever file system watchers receive file system change notifications (available on the Windows and Mac operating systems)

    How long does a virtual file persist?

    • A particular file on disk is represented by equal VirtualFile instances for the entire lifetime of the IDEA process. There may be several instances corresponding to the same file, and they can be garbage-collected. The file is a UserDataHolder, and the user data is shared between those equal instances. If a file is deleted, its corresponding VirtualFile instance becomes invalid (the isValid() method returns false and operations cause exceptions).

    How do I create a virtual file?

    • Usually you don’t. As a rule, files are created either through the PSI API or through the regular java.io.File API

    Are there any utilities for analyzing and manipulating virtual files?

    • VfsUtil and VfsUtilCore provide utility methods for analyzing files in the VFS
    • You can use ProjectLocator to find the projects that contain a given virtual file

    How do I extend VFS?

    • To provide an alternative file system implementation (for example, an FTP file system), implement the VirtualFileSystem class, and register your implementation as an application component
    • To hook into operations performed in the local file system (for example, if you are developing a version control system integration that needs custom rename/move handling), implement the LocalFileOperationsHandler interface and register it through the LocalFileSystem.registerAuxiliaryFileOperationsHandler method


    Documents

    A document is an editable sequence of Unicode characters, which typically corresponds to the text contents of a virtual file. Line breaks in a document are always normalized to \n. The IntelliJ Platform transparently handles encoding and line break conversions when loading and saving documents.

    How do I get a document?

    • From an action: e.getData(PlatformDataKeys.EDITOR).getDocument()
    • From a virtual file: FileDocumentManager.getDocument(). This call forces the document content to be loaded from disk if it wasn’t loaded previously
    • From a PSI file: PsiDocumentManager.getInstance().getDocument() or PsiDocumentManager.getInstance().getCachedDocument()

    What can I do with a Document?

    • You may perform any operations that access or modify the file contents on “plain text” level (as a sequence of characters, not as a tree of Java elements).

    Where does a Document come from?

    • Document instances are created when some operation needs to access the text contents of a file (in particular, this is needed to build the PSI for a file)
    • Also, document instances not linked to any virtual files can be created temporarily, for example, to represent the contents of a text editor field in a dialog

    How long does a Document persist?

    • Document instances are weakly referenced from VirtualFile instances. Thus, an unmodified Document instance can be garbage-collected if it isn’t referenced by anyone, and a new instance will be created if the document contents is accessed again later. Storing Document references in long-term data structures of your plugin will cause memory leaks.

    How do I create a Document?

    • If you need to create a new file on disk, you don’t create a Document: you create a PSI file and then get its Document
    • If you need to create a Document instance which isn’t bound to anything, you can use EditorFactory.createDocument

    How do I get notified when Documents change?

    • Document.addDocumentListener allows you to receive notifications about changes in a particular Document instance
    • EditorFactory.getEventMulticaster().addDocumentListener allows you to receive notifications about changes in all open documents
    • Subscribe to AppTopics#FILE_DOCUMENT_SYNC on a message bus of any level to receive notifications when any Document is saved or reloaded from disk

    What are the rules of working with Documents?

    • The general read/write action rules are in effect. In addition to that, any operations which modify the contents of the document must be wrapped in a command (CommandProcessor.getInstance().executeCommand()). executeCommand() calls can be nested, and the outermost executeCommand call is added to the undo stack. If multiple documents are modified within a command, undoing this command will by default show a confirmation dialog to the user.
    • If the file corresponding to a Document is read-only (for example, not checked out from the version control system), document modifications will fail. Thus, before modifying the Document, it is necessary to call ReadonlyStatusHandler.getInstance(project).ensureFilesWritable() to check out the file if necessary.
    • All text strings passed to Document modification methods (setText, insertString, replaceString) must use only \n as line separators.

    Are there any utilities available for working with Documents?

    • DocumentUtil contains utility methods for Document processing. This allows you to get information like the text offsets of particular lines. This is particularly useful when you need text location/offset information about a given PsiElement.
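
    Because line breaks in a Document are normalized to \n, offset arithmetic is straightforward. Below is a plain-Java sketch of computing line start offsets, the kind of information DocumentUtil and Document expose on the platform side; the class name is illustrative and no platform classes are used.

```java
import java.util.ArrayList;
import java.util.List;

// Compute the start offset of each line in text whose line breaks
// are normalized to '\n', as Document guarantees.
class LineOffsets {
    static List<Integer> lineStartOffsets(String text) {
        List<Integer> starts = new ArrayList<>();
        starts.add(0); // line 0 always starts at offset 0
        for (int i = 0; i < text.length(); i++) {
            if (text.charAt(i) == '\n') {
                starts.add(i + 1); // the next line begins after the break
            }
        }
        return starts;
    }
}
```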


    Working with Text

    Working with text at the plain-text level (as opposed to PSI). The example uses the action system (a menu action registered in the EditorPopupMenu group) to access a caret placed in a document open in an editor.

    public class EditorIllustrationAction extends AnAction {
      @Override
      public void update(@NotNull final AnActionEvent e) {
        // Get the required data keys
        final Project project = e.getProject();
        final Editor editor = e.getData(CommonDataKeys.EDITOR);
        // Enable and show the action only for an existing project and editor with an active selection
        e.getPresentation().setEnabledAndVisible(project != null && editor != null && editor.getSelectionModel().hasSelection());
      }
    }

    Handling actions activated by keystroke events in the editor involves steps such as:

    • Gain access to the document.
    • Get the character locations defining the selection.
    • Safely replace the contents of the selection.

    Modifying the selected text requires an instance of the Document object, which is accessed from the Editor object. The Document represents the contents of a text file loaded into memory and opened in an IntelliJ Platform-based IDE editor. An instance of the Document will be used later when a text replacement is performed. The text replacement will also require information about where the selection is in the document, which is provided by the primary Caret object, obtained from the CaretModel. Selection information is measured in terms of Offset, the count of characters from the beginning of the document to a caret location. Text replacement could be done by calling the Document object’s replaceString() method. However, safely replacing the text requires the Document to be locked and any changes performed in a write action.

    public class EditorIllustrationAction extends AnAction {
        @Override
        public void actionPerformed(@NotNull final AnActionEvent e) {
          // Get all the required data from data keys
          final Editor editor = e.getRequiredData(CommonDataKeys.EDITOR);
          final Project project = e.getRequiredData(CommonDataKeys.PROJECT);
          final Document document = editor.getDocument();
          // Work off of the primary caret to get the selection info
          Caret primaryCaret = editor.getCaretModel().getPrimaryCaret();
          int start = primaryCaret.getSelectionStart();
          int end = primaryCaret.getSelectionEnd();
          // Replace the selection with a fixed string.
          // Must do this document change in a write action context.
          WriteCommandAction.runWriteCommandAction(project, () -> document.replaceString(start, end, "editor_basics"));
          // De-select the text range that was just replaced
          primaryCaret.removeSelection();
        }
    }

    Run Configurations

    Run Configuration Management

    Configuration type

    The list of available configuration types is shown when a user opens the “Edit run configurations” dialog and invokes the “Add” action. Every type there is represented as an instance of ConfigurationType and registered as follows:

    <configurationType implementation="org.jetbrains.plugins.gradle.service.execution.GradleExternalTaskConfigurationType" />

    The easiest way to implement this interface is to use the ConfigurationTypeBase base class. In order to use it, you need to inherit from it and to provide the configuration type parameters (ID, name, description and icon) as constructor parameters. In addition to that, you need to call the addFactory() method to add a configuration factory.
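
    A sketch of such a configuration type, with hypothetical IDs, names, and a hypothetical DemoRunConfiguration class, assuming the ConfigurationTypeBase and ConfigurationFactory base classes (recent platform versions may require additional overrides, such as getId() on the factory):

```java
// Hypothetical configuration type built on ConfigurationTypeBase.
class DemoConfigurationType extends ConfigurationTypeBase {
  DemoConfigurationType() {
    super("DemoRunConfiguration",          // ID
          "Demo",                          // display name
          "Runs the demo process",         // description
          AllIcons.General.Information);   // icon
    addFactory(new ConfigurationFactory(this) {
      @Override
      public RunConfiguration createTemplateConfiguration(@NotNull Project project) {
        // DemoRunConfiguration is a hypothetical RunConfiguration subclass.
        return new DemoRunConfiguration(project, this, "Demo");
      }
    });
  }
}
```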

    Configuration factory

    All run configurations are created by the ConfigurationFactory registered for a particular ConfigurationType. One ConfigurationType may have more than one ConfigurationFactory.

    The key API of ConfigurationFactory, and the only method that you’re required to implement, is the createTemplateConfiguration method. This method is called once per project to create the template run configuration.

    All real run configurations (loaded from the workspace or created by the user) are created by cloning the template through the createConfiguration method.

    Run configuration

    The run configuration itself is represented by the RunConfiguration interface. A “run configuration” here is a named profile that can be executed: e.g. an application started via its main() method, a test run, a remote debug session to a particular machine/port, etc.

    When implementing a run configuration, you may want to use one of the common base classes:

    • RunConfigurationBase is a general-purpose superclass that contains the most basic implementation of a run configuration.
    • LocatableConfigurationBase is a common base class that should be used for configurations that can be created from context by a RunConfigurationProducer. It supports automatically generating a name for a configuration from its settings and keeping track of whether the name was changed by the user.
    • ModuleBasedConfiguration is a base class for a configuration that is associated with a specific module (for example, Java run configurations use the selected module to determine the run classpath).

    Settings editor

    RunConfiguration-specific UI is handled by SettingsEditor:

    • getComponent() method is called by the IDE and shows run configuration specific UI
    • resetFrom() is called to discard all non-confirmed user changes made via that UI
    • applyTo() is called to confirm the changes, i.e. copy current UI state into the target settings object


    Persistence

    Run configuration settings are persistent: they are stored on the file system and loaded back on IDE startup via the writeExternal() and readExternal() methods of the RunConfiguration class, respectively. The actual configurations stored by the IntelliJ Platform are represented by instances of the RunnerAndConfigurationSettings class, which combines a run configuration with runner-specific settings and keeps track of certain run configuration flags such as “temporary” or “singleton”. Dealing with instances of this class becomes necessary when you need to create run configurations from code. This is accomplished as follows:

    • RunManager.createConfiguration() creates an instance of RunnerAndConfigurationSettings
    • RunManager.addConfiguration() makes it persistent by adding it to either the list of shared configurations stored in a project, or to the list of local configurations stored in the workspace file

    Refactoring support

    Most run configurations contain references to classes, files or directories in their settings, and these settings usually need to be updated when the corresponding element is renamed or moved. In order to support that, your run configuration needs to implement the RefactoringListenerProvider interface.

    In your implementation of getRefactoringElementListener(), you need to check whether the element being refactored is the one that your run configuration refers to, and if it is, you return a RefactoringElementListener that updates your configuration according to the new name and location of the element.


    Execution

    The standard execution of a run action goes through the following steps:

    • The user selects a run configuration and an executor
    • The program runner that will actually execute the process is selected, by polling all registered program runners and asking whether they can run the specified run profile with the specified executor ID
    • The ExecutionEnvironment object is created. This object aggregates all the settings required to execute the process, as well as the selected ProgramRunner
    • ProgramRunner.execute() is called, receiving the executor and the execution environment

    Implementations of ProgramRunner.execute() go through the following steps to execute the process:

    • RunProfile.getState() method is called to create a RunProfileState object, describing a process about to be started. At this stage, the command line parameters, environment variables and other information required to start the process is initialized
    • RunProfileState.execute() is called. It starts the process, attaches a ProcessHandler to its input and output streams, creates a console to display the process output, and returns an ExecutionResult object aggregating the console and the process handler
    • The RunContentBuilder object is created and invoked to display the execution console in a tab of the Run or Debug tool window


    Executor

    The Executor interface describes a specific way of executing a run configuration. The three executors provided by the IntelliJ Platform by default are Run, Debug, and Run with Coverage. Each executor gets its own toolbar button, which starts the selected run configuration using that executor, and its own context menu item for starting a configuration with it.

    Running a process

    The RunProfileState interface comes up in every run configuration implementation as the return value of RunProfile.getState(). It describes a process which is ready to be started and holds information such as the command line, current working directory, and environment variables for the process. (The existence of RunProfileState as a separate step in the execution flow allows run configuration extensions and other components to patch the configuration and modify its parameters before it gets executed.)

    The standard base class used as an implementation of RunProfileState is CommandLineState. It contains the logic for putting together a running process and a console into an ExecutionResult, but doesn’t know anything about how the process is actually started. For starting the process, it’s best to use the GeneralCommandLine class, which takes care of setting up the command line parameters and executing the process. Alternatively, if the process you need to run is a JVM-based one, you can use the JavaCommandLineState base class. It knows about the command line parameters of the JVM and can take care of details like calculating the classpath for the JVM.

    To monitor the execution of a process and capture its output, the OSProcessHandler class is normally used. Once you’ve created an instance of OSProcessHandler from either a command line or a Process object, you need to call the startNotify() method to start capturing its output. You may also want to attach a ProcessTerminatedListener to the OSProcessHandler, so that the exit status of the process will be displayed in the console.
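
    Putting these pieces together, starting and monitoring an external process might look like this sketch (the command, working directory, and surrounding context are illustrative; it assumes the platform's GeneralCommandLine, OSProcessHandler, and ProcessTerminatedListener classes):

```java
// Hypothetical: start an external process and capture its output.
GeneralCommandLine commandLine = new GeneralCommandLine("git", "status");
commandLine.setWorkDirectory(project.getBasePath());

OSProcessHandler processHandler = new OSProcessHandler(commandLine);
// Show the exit status in the console when the process terminates.
ProcessTerminatedListener.attach(processHandler);
// Begin capturing the process output.
processHandler.startNotify();
```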

    Displaying the process output

    If you’re using CommandLineState, a console view will be automatically created and attached to the output of the process. Alternatively, you can arrange this yourself:

    • TextConsoleBuilderFactory.createBuilder(project).getConsole() creates a ConsoleView instance
    • ConsoleView.attachToProcess() attaches it to the output of a process
    • If the process you’re running uses ANSI escape codes to color its output, the ColoredProcessHandler class will parse it and display the colors in the IntelliJ console

    Console filters allow you to convert certain strings found in the process output to clickable hyperlinks. To attach a filter to the console, use CommandLineState.addConsoleFilters() or TextConsoleBuilder.addFilter()



    FilePath

    A FilePath represents a path to a file or directory on disk or in the VCS repository. FilePath instances representing paths in a VCS repository, rather than local paths, are created using VcsContextFactory.createFilePathOnNonLocal().


    ContentRevision

    A ContentRevision represents a particular revision of a file, which exists either locally or in a VCS repository. It has three main attributes:

    • FilePath specifying the file of which this is a revision. If some version of the file exists locally, this should be a local path
    • VcsRevisionNumber specifying the revision number of the revision, or VcsRevisionNumber.NULL if the revision exists only locally
    • Content of the revision


    Change

    A Change represents a single file operation (creation, modification, move/rename or deletion) from a VCS point of view. A Change can represent either a modification which the user has performed locally and not yet committed, a committed modification, or some other type of modification (for example, a shelved change or a difference between two arbitrary revisions).


    ChangeList

    A ChangeList represents a named group of related changes:

    • LocalChangeList represents a group of modifications done by a user locally. If the VCS also supports the concept of changelists (like Perforce does), the VCS plugin can synchronize IDEA’s local changelist structure with that of the VCS. Otherwise, a local changelist is simply a subset of the files checked out or modified by the user.
    • CommittedChangeList represents a set of modifications checked in to the VCS repository. For VCSes which support atomic commit, every committed revision is represented by a CommittedChangeList. For VCSes which use per-file commit (like CVS), the plugin can use heuristics to group a sequence of individual file commits into a CommittedChangeList

    Project Structure

    Project and Its Components

    • Project: In the IntelliJ Platform, a project encapsulates all of a project’s source code, libraries, and build instructions into a single organizational unit. Everything done using the IntelliJ Platform SDK is done within the context of a project. A project defines collections referred to as modules and libraries. Depending on the logical and functional requirements for the project, you can create a single-module or a multi-module project.
    • Module: A module is a discrete unit of functionality that can be run, tested, and debugged independently. Modules include such things as source code, build scripts, unit tests, deployment descriptors, etc. In a project, each module can use a specific SDK or inherit the SDK defined at the project level. A module can depend on other modules of the project.
    • Library: A library is an archive of compiled code (such as JAR files) on which modules depend.
      • Module Library: the library classes are visible only in this module and the library information is recorded in the module’s .iml file.
      • Project Library: the library classes are visible within the project and the library information is recorded in the project’s .ipr file or in .idea/libraries.
      • Global Library: the library information is recorded in the applicationLibraries.xml file in the ~/.IntelliJIdea/config/options directory. Global libraries are similar to project libraries, but are visible for different projects.
    • SDK: Every project uses an SDK. For Java projects, the SDK is referred to as the JDK. The SDK determines which API library is used to build the project. If a project is multi-module, the project SDK is common for all modules within the project by default. Optionally, a project can configure an individual SDK for each module.
    • Facet: A facet represents a certain configuration, specific for a particular framework/technology associated with a module. A module can have multiple facets. E.g. Spring specific configuration is stored in a Spring facet.


    PSI Files

    The PsiFile class is the common base class for all PSI files, while files in a specific language are usually represented by its subclasses. For example, the PsiJavaFile class represents a Java file. Unlike VirtualFile and Document, which have application scope (even if multiple projects are open, each file is represented by the same VirtualFile instance), PSI has project scope (the same file is represented by multiple PsiFile instances if the file belongs to multiple projects open at the same time).

    • How do I get a PSI file?
      • From an action: e.getData(LangDataKeys.PSI_FILE)
      • From a VirtualFile: PsiManager.getInstance(project).findFile()
      • From a Document: PsiDocumentManager.getInstance(project).getPsiFile()
      • From an element inside the file: psiElement.getContainingFile()
      • To find files with a specific name anywhere in the project, use FilenameIndex.getFilesByName(project, name, scope)
    • What can I do with a PSI file?
      • Most interesting modification operations are performed on the level of individual PSI elements, not files as a whole
      • To iterate over the elements in a file, use psiFile.accept(new PsiRecursiveElementWalkingVisitor()...)
    • Where does a PSI file come from?
      • As PSI is language-dependent, PSI files are created through the Language object, by using the LanguageParserDefinitions.INSTANCE.forLanguage(language).createFile(fileViewProvider) method. Like documents, PSI files are created on demand when the PSI is accessed for a particular file.
    • How long do PSI files persist?
      • Like documents, PSI files are weakly referenced from the corresponding VirtualFile instances and can be garbage-collected if not referenced by anyone.
    • How do I create a PSI file?
      • The PsiFileFactory createFileFromText() method creates an in-memory PSI file with the specified contents. To save the PSI file to disk, use the PsiDirectory add() method.
    • How do I get notified when PSI files change?
      • PsiManager.getInstance(project).addPsiTreeChangeListener() allows you to receive notifications about all changes to the PSI tree of a project.

    File View Providers

    A file view provider (FileViewProvider) manages access to multiple PSI trees within a single file. For example, a JSPX page has a separate PSI tree for the Java code in it (PsiJavaFile), a separate tree for the XML code (XmlFile), and a separate tree for JSP as a whole (JspFile). Each of the PSI trees covers the entire contents of the file, and contains special “outer language elements” in the places where contents in a different language can be found. A FileViewProvider instance corresponds to a single VirtualFile, a single Document, and can be used to retrieve multiple PsiFile instances.

    • How do I get a FileViewProvider?
      • From a VirtualFile: PsiManager.getInstance(project).findViewProvider()
      • From a PsiFile: psiFile.getViewProvider()
    • What can I do with a FileViewProvider?
      • To get the set of all languages for which PSI trees exist in a file: fileViewProvider.getLanguages()
      • To get the PSI tree for a particular language: fileViewProvider.getPsi(language). For example, to get the PSI tree for XML, use fileViewProvider.getPsi(XMLLanguage.INSTANCE)
      • To find an element of a particular language at the specified offset in the file: fileViewProvider.findElementAt(offset, language)
    • How do I extend the FileViewProvider?
      • To create a file type that has multiple interspersing trees for different languages, a plugin must contain an extension to the com.intellij.fileType.fileViewProviderFactory EP
      • Implement FileViewProviderFactory and return your FileViewProvider implementation from createFileViewProvider() method

    PSI Elements

    PSI elements and operations at the level of individual PSI elements are used to explore the internal structure of source code as it is interpreted by the IntelliJ Platform. For example, you can use PSI elements to perform code analysis, such as code inspections or intention actions. The PsiElement class is the common base class for PSI elements. To get a PSI element:

    • From an action: e.getData(LangDataKeys.PSI_ELEMENT). Note: if an editor is currently open and the element under caret is a reference, this will return the result of resolving the reference.
    • From a file by offset: PsiFile.findElementAt(). Note: this returns the lowest level element (“leaf”) at the specified offset, which is normally a lexer token. Most likely you should use PsiTreeUtil.getParentOfType() to find the element you really need.
    • By iterating through a PSI file: using a PsiRecursiveElementWalkingVisitor
    • By resolving a reference: PsiReference.resolve()

    There are three main ways to navigate the PSI: top-down, bottom-up, and using references. In the first scenario, you have a PSI file or another higher-level element (for example, a method), and you need to find all elements that match a specified condition (for example, all variable declarations). In the second scenario, you have a specific point in the PSI tree (for example, the element at caret) and need to find out something about its context (for example, the element in which it has been declared). Finally, references allow you to navigate from the use of an element (e.g., a method call) to the declaration (the method being called) and back.

    Top-down navigation

    The most common way to perform top-down navigation is to use a visitor. To use a visitor, you create a class (usually an anonymous inner class) that extends the base visitor class, override the methods that handle the elements you’re interested in, and pass the visitor instance to PsiElement.accept(). The base classes for visitors are language-specific. For example, if you need to process elements in a Java file, you extend JavaRecursiveElementVisitor and override the methods corresponding to the Java element types you’re interested in.

    // use of a visitor to find all Java local variable declarations:
    file.accept(new JavaRecursiveElementVisitor() {
      @Override
      public void visitLocalVariable(PsiLocalVariable variable) {
        super.visitLocalVariable(variable);
        System.out.println("Found a variable at offset " + variable.getTextRange().getStartOffset());
      }
    });

    In many cases, you can also use more specific APIs for top-down navigation. For example, if you need to get a list of all methods in a Java class, you can do that using a visitor, but a much easier way to do that is to call PsiClass.getMethods(). PsiTreeUtil contains a number of general-purpose, language-independent functions for PSI tree navigation, some of which (for example, findChildrenOfType()) perform top-down navigation.
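
    Both approaches can be sketched as follows (assuming psiFile is a PsiJavaFile obtained elsewhere):

```java
import com.intellij.psi.PsiClass;
import com.intellij.psi.PsiJavaFile;
import com.intellij.psi.PsiLocalVariable;
import com.intellij.psi.PsiMethod;
import com.intellij.psi.util.PsiTreeUtil;
import java.util.Collection;

// Language-independent top-down navigation via PsiTreeUtil:
Collection<PsiLocalVariable> variables =
    PsiTreeUtil.findChildrenOfType(psiFile, PsiLocalVariable.class);

// Java-specific API: no visitor needed to list the methods of a class
for (PsiClass psiClass : psiFile.getClasses()) {
  PsiMethod[] methods = psiClass.getMethods();
}
```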

    Bottom-up navigation

    The starting point for bottom-up navigation is either a specific element in the PSI tree (for example, the result of resolving a reference), or an offset. If you have an offset, you can find the corresponding PSI element by calling PsiFile.findElementAt(). This method returns the element at the lowest level of the tree (for example, an identifier), and you need to navigate the tree up if you want to determine the broader context.

    In most cases, bottom-up navigation is performed by calling PsiTreeUtil.getParentOfType(). This method goes up the tree until it finds the element of the type you’ve specified. For example, to find the containing method, you call PsiTreeUtil.getParentOfType(element, PsiMethod.class). In some cases, you can also use specific navigation methods. For example, to find the class where a method is contained, you call PsiMethod.getContainingClass().

    PsiFile psiFile = anActionEvent.getData(CommonDataKeys.PSI_FILE);
    PsiElement element = psiFile.findElementAt(offset);
    PsiMethod containingMethod = PsiTreeUtil.getParentOfType(element, PsiMethod.class);
    PsiClass containingClass = containingMethod.getContainingClass();

    PSI References

    A reference in a PSI tree is an object that represents a link from a usage of a certain element in the code to the corresponding declaration. Resolving a reference means locating the declaration to which a specific usage refers. The most common type of references is defined by language semantics. For example, consider a simple Java method:

    public void hello(String message) {
        System.out.println(message);
    }

    This simple code fragment contains five references. The references created by the identifiers String, System, out and println can be resolved to the corresponding declarations in the JDK: the String and System classes, the out field and the println method. The reference created by the second occurrence of the message identifier in println(message) can be resolved to the message parameter, declared by String message in the method header.

    Note that String message is not a reference, and cannot be resolved. Instead, it’s a declaration. It does not refer to any name defined elsewhere; instead, it defines a name by itself.

    A reference is an instance of a class implementing the PsiReference interface. Note that references are distinct from PSI elements. You can obtain the references created by a PSI element by calling PsiElement.getReferences(), and can go back from a reference to an element by calling PsiReference.getElement().

    To resolve the reference - to locate the declaration being referenced - you call PsiReference.resolve(). It’s very important to understand the difference between PsiReference.getElement() and PsiReference.resolve(). The former method returns the source of a reference, while the latter returns its target. In the example above, for the message reference, getElement() will return the message identifier on the second line of the snippet, and resolve() will return the message identifier on the first line (inside the parameter list).

    The process of resolving references is distinct from parsing, and is not performed at the same time. Moreover, it is not always successful. If the code currently open in the IDE does not compile, or in other situations, it’s normal for PsiReference.resolve() to return null, and if you work with references, you need to be able to handle that in your code.

    References with Optional or Multiple Resolve Results

    In the simplest case, a reference resolves to a single element, and if the resolve fails, this means that the code is incorrect and the IDE needs to highlight it as an error. However, there are cases when the situation is different, for example with polyvariant references. Consider the case of a JavaScript program. JavaScript is a dynamically typed language, so the IDE cannot always precisely determine which method is being called at a particular location. To handle this, it provides a reference that can be resolved to multiple possible elements. Such references implement the PsiPolyVariantReference interface.

    For resolving a PsiPolyVariantReference, you call its multiResolve() method. The call returns an array of ResolveResult objects. Each of the objects identifies a PSI element and also specifies whether the result is valid. For example, if you have multiple Java method overloads and a call with arguments not matching any of the overloads, you will get back ResolveResult objects for all of the overloads, and isValidResult() will return false for all of them.
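
    A minimal sketch of handling both kinds of references (assuming reference is a PsiReference obtained from a PSI element):

```java
import com.intellij.psi.PsiElement;
import com.intellij.psi.PsiPolyVariantReference;
import com.intellij.psi.PsiReference;
import com.intellij.psi.ResolveResult;

// Hypothetical snippet: a reference may resolve to one or several targets.
if (reference instanceof PsiPolyVariantReference) {
  ResolveResult[] results = ((PsiPolyVariantReference) reference).multiResolve(false);
  for (ResolveResult result : results) {
    PsiElement target = result.getElement();
    if (result.isValidResult()) {
      // a candidate that actually matches, e.g. an applicable overload
    }
  }
} else {
  PsiElement target = reference.resolve(); // may be null; always check
}
```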

    Searching for References

    As you already know, resolving a reference means going from a usage to the corresponding declaration. To perform the navigation in the opposite direction - from a declaration to its usages - you need to perform a references search using the ReferencesSearch class. You specify the element to search for and, optionally, other parameters such as the scope in which the reference needs to be searched. You get back a query object that allows you to get all results as an array, or to iterate over the results one by one. If you don’t need to collect all the results, it’s more efficient to use the iteration, because it allows you to stop the processing once you’ve found the element you need.
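
    A sketch of both styles of search (assuming method is a PsiMethod whose usages we want; isTheUsageWeWant() is a made-up helper standing in for your own filtering logic):

```java
import com.intellij.psi.PsiMethod;
import com.intellij.psi.PsiReference;
import com.intellij.psi.search.searches.ReferencesSearch;
import com.intellij.util.Query;
import java.util.Collection;

Query<PsiReference> query = ReferencesSearch.search(method);

// Collect everything at once:
Collection<PsiReference> allUsages = query.findAll();

// Or iterate lazily and stop early (more efficient when one result suffices):
query.forEach(psiReference -> {
  // returning false stops the processing of further results
  return !isTheUsageWeWant(psiReference);
});
```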

    Modifying the PSI

    The PSI is a read-write representation of the source code as a tree of elements corresponding to the structure of a source file. You can modify the PSI by adding, replacing, and deleting PSI elements. To perform these operations, you use methods such as PsiElement.add(), PsiElement.delete(), and PsiElement.replace(), as well as other methods defined in the PsiElement interface that let you process multiple elements in a single operation or specify the exact location in the tree where an element needs to be added. Just like document operations, PSI modifications need to be wrapped in a write action and in a command (and therefore can only be performed in the EDT).
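
    A sketch of such wrapping (unwantedElement, parentElement, and newElement are hypothetical PSI elements obtained elsewhere; WriteCommandAction runs the lambda inside both a command and a write action):

```java
import com.intellij.openapi.command.WriteCommandAction;

WriteCommandAction.runWriteCommandAction(project, () -> {
  unwantedElement.delete();        // e.g. remove a statement
  parentElement.add(newElement);   // or insert a freshly created element
});
```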

    Creating the New PSI

    The PSI elements to add to the tree, or to replace existing PSI elements, are normally created from text. In the most general case, you use the createFileFromText() method of PsiFileFactory to create a new file that contains the code construct which you need to add to the tree or to use as a replacement for an existing element, traverse the resulting tree to locate the specific element that you need, and then pass that element to add() or replace().

    When you’re implementing refactorings, intentions or inspection quickfixes that work with existing code, the text that you pass to the various createFromText() methods will combine hard-coded fragments and fragments of code taken from the existing file. For small code fragments (individual identifiers), you can simply append the text from the existing code to the text of the code fragment you’re building. In that case, you need to make sure that the resulting text is syntactically correct, otherwise the createFromText() method will throw an exception.

    For larger code fragments, it’s best to perform the modification in several steps:

    • create a replacement tree fragment from text, leaving placeholders for the user code fragments
    • replace the placeholders with the user code fragments
    • replace the element in the original source file with the replacement tree

    This ensures that the formatting of the user code is preserved and that the modification doesn’t introduce any unwanted whitespace changes. As an example of this approach, see the quickfix in the ComparingReferencesInspection example:

    // binaryExpression holds a PSI expression of the form "x == y", which needs to be replaced with "x.equals(y)"
    PsiBinaryExpression binaryExpression = (PsiBinaryExpression) descriptor.getPsiElement();
    IElementType opSign = binaryExpression.getOperationTokenType();
    PsiExpression lExpr = binaryExpression.getLOperand();
    PsiExpression rExpr = binaryExpression.getROperand();
    // Step 1: Create a replacement fragment from text, with "a" and "b" as placeholders
    PsiElementFactory factory = JavaPsiFacade.getInstance(project).getElementFactory();
    PsiMethodCallExpression equalsCall = (PsiMethodCallExpression) factory.createExpressionFromText("a.equals(b)", null);
    // Step 2: replace "a" and "b" with elements from the original file
    equalsCall.getMethodExpression().getQualifierExpression().replace(lExpr);
    equalsCall.getArgumentList().getExpressions()[0].replace(rExpr);
    // Step 3: replace a larger element in the original file with the replacement tree
    PsiExpression result = (PsiExpression) binaryExpression.replace(equalsCall);

    Whitespaces and Imports

    When working with PSI modification functions, you should never create individual whitespace nodes (spaces or line breaks) from text. Instead, all whitespace modifications are performed by the formatter, which follows the code style settings selected by the user. Formatting is automatically performed at the end of every command, and if needed, you can also perform it manually using the reformat(PsiElement) method in the CodeStyleManager class.

    Also, when working with Java code (or with code in other languages with a similar import mechanism such as Groovy or Python), you should never create imports manually. Instead, you should insert fully-qualified names into the code you’re generating, and then call the shortenClassReferences() method in the JavaCodeStyleManager (or the equivalent API for the language you’re working with). This ensures that the imports are created according to the user’s code style settings and inserted into the correct place of the file.
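
    A sketch of this pattern (context and anchor are hypothetical PSI elements marking where the new statement goes):

```java
import com.intellij.psi.JavaPsiFacade;
import com.intellij.psi.PsiElement;
import com.intellij.psi.PsiElementFactory;
import com.intellij.psi.PsiStatement;
import com.intellij.psi.codeStyle.JavaCodeStyleManager;

// Generate code with fully-qualified names...
PsiElementFactory factory = JavaPsiFacade.getInstance(project).getElementFactory();
PsiStatement statement = factory.createStatementFromText(
    "java.util.List<String> names = new java.util.ArrayList<>();", context);
PsiElement inserted = anchor.getParent().addAfter(statement, anchor);

// ...then let the platform shorten them and add imports per the user's code style:
JavaCodeStyleManager.getInstance(project).shortenClassReferences(inserted);
```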

    Indexing and PSI Stubs

    The indexing framework provides a quick way to locate certain elements, e.g. files containing a certain word or methods with a particular name, in large code bases. Plugin developers can use the existing indexes built by the IDE itself, as well as build and use their own indexes.

    • File-based indices: built directly over the content of files. Querying a file-based index gets you the set of files matching a certain condition.
    • Stub indices: built over serialized stub trees. A stub tree for a source file is a subset of its PSI tree which contains only externally visible declarations and is serialized in a compact binary format. Querying a stub index gets you the set of matching PSI elements.

    DumbService provides API to query whether the IDE is currently in “dumb” mode (where index access is not allowed). It also provides ways of delaying code execution until indices are ready.
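
    For example (a sketch; doIndexDependentWork() is a made-up placeholder for your own logic):

```java
import com.intellij.openapi.project.DumbService;

DumbService dumbService = DumbService.getInstance(project);
if (dumbService.isDumb()) {
  // Indexes are not available yet; postpone the work until indexing finishes
  dumbService.runWhenSmart(() -> doIndexDependentWork());
} else {
  doIndexDependentWork();
}
```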

    File-based indexes

    File-based indexes are based on a Map/Reduce architecture. Each index has a certain type of key and a certain type of value. The key is what’s later used to retrieve data from the index. For an example of implementing a file-based index, see com.intellij.uiDesigner.binding.FormClassIndex.

    An implementation of a file-based index consists of the following main parts:

    • getIndexer() returns the indexer class actually responsible for building a set of key/value pairs based on file content.
    • getKeyDescriptor() returns the key descriptor responsible for comparing keys and storing them in a serialized binary format.
      • Probably the most commonly used KeyDescriptor implementation is EnumeratorStringDescriptor which is designed for storing identifiers in an efficient way.
    • getValueExternalizer() returns the value serializer responsible for storing values in a serialized binary format.
    • getInputFilter() allows restricting the indexing to a certain set of files.
    • getVersion() returns the version of the index implementation. The index is automatically rebuilt if the current version differs from the version of the index implementation used to build the index.

    Access to file-based indexes is performed through the FileBasedIndex class. The following primary operations are supported:

    • getAllKeys() and processAllKeys() let you obtain the list of all keys found in files which are part of the specified project.
    • getValues() lets you obtain all values associated with a specific key, but not the files in which they were found.
    • getContainingFiles() lets you obtain all files in which a specific key was encountered.
    • processValues() lets you iterate through all files in which a specific key was encountered and access the associated values at the same time.
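
    A sketch of querying a file-based index through this API (INDEX_ID is a made-up index identifier standing in for your own ID<String, MyValue> constant):

```java
import com.intellij.openapi.vfs.VirtualFile;
import com.intellij.psi.search.GlobalSearchScope;
import com.intellij.util.indexing.FileBasedIndex;
import java.util.Collection;

FileBasedIndex index = FileBasedIndex.getInstance();
GlobalSearchScope scope = GlobalSearchScope.projectScope(project);

for (String key : index.getAllKeys(INDEX_ID, project)) {
  // Files in which this key occurs:
  Collection<VirtualFile> files = index.getContainingFiles(INDEX_ID, key, scope);

  // Files together with their associated values:
  index.processValues(INDEX_ID, key, null, (file, value) -> {
    // inspect file/value here; return true to keep iterating
    return true;
  }, scope);
}
```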

    The IntelliJ Platform contains a number of standard file-based indexes. The most useful indexes for plugin developers are:

    • Word index: the word index should be accessed indirectly by using helper methods of the PsiSearchHelper class
    • File name index FilenameIndex: provides a quick way to find all files matching a certain file name. FileTypeIndex serves a similar goal: it lets you quickly find all files of a certain file type.
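
    For example, a sketch of using both standard indexes:

```java
import com.intellij.ide.highlighter.XmlFileType;
import com.intellij.openapi.vfs.VirtualFile;
import com.intellij.psi.PsiFile;
import com.intellij.psi.search.FileTypeIndex;
import com.intellij.psi.search.FilenameIndex;
import com.intellij.psi.search.GlobalSearchScope;
import java.util.Collection;

GlobalSearchScope scope = GlobalSearchScope.projectScope(project);

// All files named "build.gradle" in the project:
PsiFile[] gradleFiles = FilenameIndex.getFilesByName(project, "build.gradle", scope);

// All XML files in the project:
Collection<VirtualFile> xmlFiles = FileTypeIndex.getFiles(XmlFileType.INSTANCE, scope);
```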

    Stub Indexes

    A stub tree is a subset of the PSI tree for a file; it is stored in a compact serialized binary format. The PSI tree for a file can be backed either by the AST (built by parsing the text of the file) or by the stub tree deserialized from disk. Switching between the two is transparent. The stub tree contains only a subset of the nodes. Typically it contains only the nodes that are needed to resolve the declarations contained in this file from external files. Trying to access any node which is not part of the stub tree, or to perform any operation which cannot be satisfied by the stub tree, e.g. accessing the text of a PSI element, causes file parsing and switches the PSI to AST backing.

    Each stub in the stub tree is simply a bean class with no behavior. A stub stores a subset of the state of the corresponding PSI element, like element’s name, modifier flags like public or final, etc. The stub also holds a pointer to its parent in the tree and a list of its children stubs. To support stubs for your custom language, you first need to decide which of the elements of your PSI tree should be stored as stubs. Typically you need to have stubs for things like methods or fields, which are visible from other files. You usually don’t need to have stubs for things like statements or local variables, which are not visible externally.


    Code Inspection and Intentions

    IntelliJ IDEA performs code analysis by applying inspections to your code. Numerous code inspections exist for Java and for the other supported languages. The inspections detect not only compile errors but also various code inefficiencies: unreachable code, unused code, non-localized strings, unresolved methods, memory leaks, and even spelling problems are all found very quickly.

    External Builder API and Plugins

    External Build Process Workflow

    When the user invokes an action that involves executing an external build (Make, Build Artifacts, etc.), the following steps happen:

    • Before-compile tasks are executed in the IDE process.
    • Some source generation tasks that depend on the PSI (e.g. UI designer form to source compilation) are executed in the IDE process.
    • BuildTargetScopeProvider extensions are called to calculate the scope of the external build (the set of build targets to compile based on the target module to make and the known set of changes).
    • The external build process is spawned (or an existing background build process is reused).
    • The external build process loads the project model (.idea, .iml files and so on), represented by a JpsModel instance.
    • The full tree of targets to build is calculated, based on the dependencies of each build target to be compiled.
    • For each target, the set of builders capable of building this target is calculated.
    • For every target and every builder, the build() method is called. This can happen in parallel if the “Compile independent modules in parallel” option is enabled in the settings. For module-level builders, the order of invoking builders for a single target is determined by their category; for other builders, the order is undefined.
    • Caches to record the state of the compilation are saved.
    • Compilation messages reported through the CompileContext API are transmitted to the IDE process and displayed in the UI (in the Messages view).
    • Post-compile tasks are executed in the IDE process.

    Incremental Build

    To support incremental build, the build process uses a number of caches which are persisted between build invocations. Even if your compiler doesn’t support incremental build, you still need to report correct information so that incremental build works correctly for other compilers.

    • SourceToOutputMapping is a many-to-many relationship between source files and output files (“which source files were used to produce the specified output file”). It’s filled by calls to BuildOutputConsumer.registerOutputFile() and ModuleLevelBuilder.OutputConsumer.registerOutputFile()
    • Timestamps records the timestamp of each source file when it was compiled by each build target. (If the compilation was invoked multiple times with different scopes and the file was changed in the meantime, the last compiled timestamps for different build targets may vary.) It’s updated automatically by JPS.

    The IDE monitors the changes of the project content and uses the information from those caches to generate the set of dirty and deleted files for every compilation. (Dirty files need to be recompiled, and deleted files need to have their output deleted). A builder can also report additional files as dirty (e.g. if a method is deleted, the builder can report the classes using this method as dirty.) A module-level builder can add some files to the dirty scope; if this happens, and if the builder returns ADDITIONAL_PASS_REQUIRED from its build() method, another round of builder execution for the same module chunk will be started with the new dirty scope.

    A builder may also want to have its own custom caches to store additional information to support partial recompilation of a target (e.g. the dependencies between Java files in a module). To store this data, you can either store arbitrary files in the directory returned from BuildDataManager.getDataPaths().getTargetDataRoot() or use a higher-level API: BuildDataManager.getStorage()

    To pass custom data between invocations of the same builder for multiple targets, you can use CompileContext.getUserData() and CompileContext.putUserData().
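
    A sketch of such data sharing (MY_BUILDER_STATE is a made-up key owned by this builder; the value type is arbitrary):

```java
import com.intellij.openapi.util.Key;
import java.util.HashMap;
import java.util.Map;
import org.jetbrains.jps.incremental.CompileContext;

public final class MyBuilderState {
  // Key under which this builder stashes its per-build state
  public static final Key<Map<String, Long>> MY_BUILDER_STATE =
      Key.create("my.builder.state");

  public static Map<String, Long> getState(CompileContext context) {
    Map<String, Long> state = context.getUserData(MY_BUILDER_STATE);
    if (state == null) {
      state = new HashMap<>();
      context.putUserData(MY_BUILDER_STATE, state);
    }
    return state;
  }
}
```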

    Testing Plugins

    Most of the tests in the IntelliJ Platform codebase are model level functional tests. What this means is the following:

    • The tests run in a headless environment that uses real production implementations for the majority of components, except for a number of UI components.
    • The tests usually test a feature as a whole, rather than individual functions that comprise its implementation.
    • The tests do not test the Swing UI and work directly with the underlying model instead.
    • Most of the tests take a source file or a set of source files as input data, execute a feature, and then compare the output with expected results. Results can be specified as another set of source files, as special markup in the input file, or directly in the test code.

    Another consequence of our testing approach is what our test framework does not provide:

    • We do not provide a recommended approach to mocking. We have a few tests in our codebase that use JMock, but in general, we find it difficult to mock all of the interactions with IntelliJ Platform components that your plugin class will need to have, and we recommend working with real components instead.
    • We do not provide a general-purpose framework for Swing UI testing. You can try using tools such as FEST or Sikuli for plugin UI testing, but we don’t use either of them and cannot provide any guidelines for their use. Internally, we use manual testing for testing our Swing UIs. Please do not use platform/testGuiFramework, it is reserved for internal use.

    Tests and Fixtures

    The IntelliJ Platform testing infrastructure is not tied to any specific test framework. In fact, the IntelliJ IDEA Team uses JUnit, TestNG and Cucumber for testing different parts of the project. However, most of the tests are written using JUnit 3.

    • When writing your tests, you have the choice between using a standard base class to perform the test set up for you and using a fixture class, which lets you perform the setup manually and does not tie you to a specific test framework.
    • With the former approach, you can use classes such as LightPlatformCodeInsightFixtureTestCase
    • With the latter approach, you use the IdeaTestFixtureFactory class to create instances of fixtures for the test environment, and you need to call the fixture creation and setup methods from the test setup method used by your test framework.

    Light and Heavy Tests

    Plugin tests run in a real, rather than mocked, IntelliJ Platform environment and use real implementations for most of the application and project components/services. Loading and initializing all the project components and services for a project to run tests is a quite expensive operation, and we want to avoid doing it for each test. Depending on loading and execution time, the IntelliJ Platform test framework distinguishes between heavy tests and light tests:

    Light Tests

    Light tests reuse a project from the previous test run when possible. Extend the following classes:

    • LightPlatformCodeInsightFixtureTestCase for tests that don’t have any Java dependencies.
    • LightCodeInsightFixtureTestCase for tests that require the Java PSI or any related functionality.

    When writing a light test, you can specify the requirements for the project that you need to have in your test, such as the module type, the configured SDK, facets, libraries, etc. You do so by extending the LightProjectDescriptor class and returning your project descriptor from getProjectDescriptor(). Before executing each test, the project will be reused if the test case returns the same project descriptor as the previous one, or recreated if the descriptor is different.

    Heavy Tests

    Heavy tests create a new project for each test. The setup code for a multi-module Java project looks something like this:

    final TestFixtureBuilder<IdeaProjectTestFixture> projectBuilder = IdeaTestFixtureFactory.getFixtureFactory().createFixtureBuilder(getName());
    // Repeat the following line for each module
    final JavaModuleFixtureBuilder moduleFixtureBuilder = projectBuilder.addModule(JavaModuleFixtureBuilder.class);
    myFixture = JavaTestFixtureFactory.getFixtureFactory().createCodeInsightFixture(projectBuilder.getFixture());

    Test Project and Testdata Directories

    The test fixture creates a test project environment. Unless you customize the project creation, the test project will have one module with one source root called src. The files for the test project exist either in a temporary directory or in an in-memory file system, depending on which implementation of TempDirTestFixture is used.

    LightPlatformCodeInsightFixtureTestCase uses an in-memory implementation; if you set up the test environment by calling IdeaTestFixtureFactory.createCodeInsightFixture(), you can specify the implementation to use.

    In your plugin, you normally store the test data for your tests (such as files on which plugin features will be executed and expected output files) in the testdata directory. This is just a directory under the content root of your plugin, but not under a source root. Files in testdata usually are not valid source code and must not be compiled.

    • To specify the location of testdata, you must override the getTestDataPath() method. The default implementation assumes running as part of the IntelliJ Platform source tree and is not appropriate for third-party plugins.
    • To copy files or directories from your testdata directory to the test project directory, you can use the copyFileToProject() and copyDirectoryToProject() methods in the CodeInsightTestFixture class.

    Most operations in plugin tests require a file open in the in-memory editor, in which highlighting, completion and other operations will be performed. The in-memory editor instance is returned by CodeInsightTestFixture.getEditor(). To copy a file from the testdata directory to the test project directory and immediately open it in the editor, you can use the CodeInsightTestFixture.configureByFile() or configureByFiles() methods. The latter copies multiple files to the test project directory and opens the first of them in the in-memory editor.

    Alternatively, you can use one of the other methods which take parameters annotated with @TestDataFile. These methods copy the specified files from the testdata directory to the test project directory, open the first of the specified files in the in-memory editor, and then perform the requested operation such as highlighting or code completion.

    Writing Tests

    In most cases, once you have the necessary files copied to the test project and loaded into the in-memory editor, writing the test itself involves invoking your plugin code and has few dependencies on the test framework. However, for many common cases, the framework provides helper methods that can make testing easier:

    • type() simulates the typing of a character or string into the in-memory editor.
    • performEditorAction() simulates the execution of an action in the context of the in-memory editor.
    • complete() simulates the invocation of code completion and returns the list of lookup elements displayed in the completion list (or null if the completion had no suggestions or one suggestion which was auto-inserted).
    • findUsages() simulates the invocation of “Find Usages” and returns the found usages.
    • findSingleIntention() in combination with launchAction() simulate the invocation of an intention action or inspection quickfix with the specified name.
    • renameElementAtCaret() or rename() simulate the execution of a rename refactoring.

    To compare the results of executing the action with the expected results, you can use the checkResultByFile() method. The file with the expected results can also contain markup to specify the expected caret position or selected text range. If you’re testing an action that modifies multiple files (a project-wide refactoring, for example), you can compare an entire directory under the test project with the expected output using PlatformTestUtil.assertDirectoriesEqual().
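
    Tying these helpers together, a complete light test might look like this (a sketch: the test data path, intention name, and file names are all made up; assumes JUnit 3 style with testdata files Before.java, containing caret markup, and After.java):

```java
import com.intellij.codeInsight.intention.IntentionAction;
import com.intellij.testFramework.fixtures.LightCodeInsightFixtureTestCase;

public class MyIntentionTest extends LightCodeInsightFixtureTestCase {
  @Override
  protected String getTestDataPath() {
    return "src/test/testdata"; // plugin-relative path; adjust to your layout
  }

  public void testMakeFieldFinal() {
    myFixture.configureByFile("Before.java");           // copy + open in in-memory editor
    IntentionAction intention = myFixture.findSingleIntention("Make 'x' final");
    myFixture.launchAction(intention);                  // apply the intention
    myFixture.checkResultByFile("After.java");          // compare with expected output
  }
}
```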

    Custom Language

    Implementing Lexer

    The lexer, or lexical analyzer, defines how the contents of a file are broken into tokens. The lexer serves as a foundation for nearly all of the features of custom language plugins, from basic syntax highlighting to advanced code analysis features. The IDE invokes the lexer in three main contexts, and the plugin can provide different lexer implementations for these contexts:

    • Syntax highlighting: The lexer is returned from the implementation of the SyntaxHighlighterFactory interface which is registered in the com.intellij.lang.syntaxHighlighterFactory extension point.
    • Building the syntax tree of a file: the lexer is expected to be returned from ParserDefinition.createLexer(), and the ParserDefinition interface is registered in the com.intellij.lang.parserDefinition extension point.
    • Building the index of the words contained in the file: if the lexer-based words scanner implementation is used, the lexer is passed to the DefaultWordsScanner constructor.

    The easiest way to create a lexer for a custom language plugin is to use JFlex. The FlexLexer and FlexAdapter classes adapt JFlex lexers to the IntelliJ Platform Lexer API. For developing lexers with JFlex, the Grammar-Kit plugin can be useful: it provides syntax highlighting and other useful features for editing JFlex files.

    Types of tokens for lexers are defined by instances of IElementType. A number of token types common for all languages are defined in the TokenType interface. Custom language plugins should reuse these token types wherever applicable.
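
    A sketch of a token type holder for a hypothetical custom language (SimpleLanguage is a made-up language singleton; the platform-wide types from TokenType are reused where applicable):

```java
import com.intellij.psi.TokenType;
import com.intellij.psi.tree.IElementType;

public interface SimpleTokenTypes {
  // Language-specific token types
  IElementType IDENTIFIER = new IElementType("IDENTIFIER", SimpleLanguage.INSTANCE);
  IElementType NUMBER     = new IElementType("NUMBER", SimpleLanguage.INSTANCE);
  IElementType COMMENT    = new IElementType("COMMENT", SimpleLanguage.INSTANCE);

  // Reuse the platform-wide types instead of defining your own:
  IElementType WHITE_SPACE   = TokenType.WHITE_SPACE;
  IElementType BAD_CHARACTER = TokenType.BAD_CHARACTER;
}
```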

    Implementing a Parser and PSI

    Parsing files in IntelliJ Platform is a two-step process. First, an AST is built, defining the structure of the program. AST nodes are created internally by the IDE and are represented by instances of the ASTNode class. Each AST node has an associated element type IElementType instance, and the element types are defined by the language plugin. The top-level node of the AST tree for a file needs to have a special element type, implementing the IFileElementType interface.

    The AST nodes have a direct mapping to text ranges in the underlying document. The bottom-most nodes of the AST match individual tokens returned by the lexer, and higher level nodes match multiple-token fragments. Operations performed on nodes of the AST tree, such as inserting / removing / reordering nodes and so on, are immediately reflected as changes to the text of the underlying document.

    Second, a PSI tree is built on top of the AST, adding semantics and methods for manipulating specific language constructs. Nodes of the PSI tree are represented by classes implementing the PsiElement interface and are created by the language plugin in the ParserDefinition.createElement() method. The top-level node of the PSI tree for a file needs to implement the PsiFile interface, and is created in the ParserDefinition.createFile() method.

    Parsers and corresponding PSI classes can be generated from grammars using the Grammar-Kit plugin. Besides code generation, it provides various features for editing grammar files: syntax highlighting, quick navigation, refactorings, and more.

    The language plugin provides the parser implementation as an implementation of the PsiParser interface, returned from ParserDefinition.createParser(). The parser receives an instance of the PsiBuilder class, which is used to get the stream of tokens from the lexer and to hold the intermediate state of the AST being built. The parser must process all tokens returned by the lexer up to the end of stream, in other words until PsiBuilder.getTokenType() returns null, even if the tokens are not valid according to the language syntax.

    The parser works by setting pairs of markers (PsiBuilder.Marker instances) within the stream of tokens received from the lexer. Each pair of markers defines the range of lexer tokens for a single node in the AST tree. If a pair of markers is nested in another pair (starts after its start and ends before its end), it becomes the child node of the outer pair.

    The element type for the marker pair and for the AST node created from it is specified when the end marker is set, which is done by making a call to PsiBuilder.Marker.done(). Also, it is possible to drop a start marker before its end marker has been set. The drop() method drops only a single start marker without affecting any markers added after it, and the rollbackTo() method drops the start marker and all markers added after it and reverts the lexer position to the start marker. These methods can be used to implement lookahead when parsing.

    The method PsiBuilder.Marker.precede() is useful for right-to-left parsing when you don’t know how many markers you need at a certain position until you read more input.
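
    The marker mechanics above can be sketched as a small PsiParser that recognizes "key = value" properties; the SimpleTypes token and element types are assumed to be defined elsewhere.

```java
// Minimal PsiParser sketch: wrap each "key = value" pair in a PROPERTY marker.
// SimpleTypes.* are hypothetical token/element types for a "Simple" language.
import com.intellij.lang.ASTNode;
import com.intellij.lang.PsiBuilder;
import com.intellij.lang.PsiParser;
import com.intellij.psi.tree.IElementType;

public class SimpleParser implements PsiParser {
  @Override
  public ASTNode parse(IElementType root, PsiBuilder builder) {
    PsiBuilder.Marker rootMarker = builder.mark();
    // Consume every token up to the end of the stream, even invalid ones
    while (builder.getTokenType() != null) {
      if (builder.getTokenType() == SimpleTypes.KEY) {
        parseProperty(builder);
      } else {
        builder.error("Key expected");          // flag the invalid token
        builder.advanceLexer();
      }
    }
    rootMarker.done(root);                      // close the top-level node
    return builder.getTreeBuilt();
  }

  private static void parseProperty(PsiBuilder builder) {
    PsiBuilder.Marker property = builder.mark();
    builder.advanceLexer();                     // KEY
    if (builder.getTokenType() == SimpleTypes.SEPARATOR) {
      builder.advanceLexer();                   // '='
      if (builder.getTokenType() == SimpleTypes.VALUE) {
        builder.advanceLexer();                 // VALUE
      } else {
        builder.error("Value expected");
      }
    }
    property.done(SimpleTypes.PROPERTY);        // marker pair becomes a PROPERTY node
  }
}
```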

    An important feature of PsiBuilder is its handling of whitespace and comments. The types of tokens which are treated as whitespace or comments are defined by the methods getWhitespaceTokens() and getCommentTokens() in the ParserDefinition class. PsiBuilder automatically omits whitespace and comment tokens from the stream of tokens it passes to PsiParser, and adjusts the token ranges of AST nodes so that leading and trailing whitespace tokens are not included in the node.

    Syntax Highlighting and Error Highlighting

    Syntax and error highlighting is performed on multiple levels: the lexer, the parser, and (external) annotators.


    The first level of syntax highlighting is based on the lexer output, and is provided through the SyntaxHighlighter interface. The syntax highlighter returns the TextAttributesKey instances for each token type which needs special highlighting. For highlighting lexer errors, the standard TextAttributesKey for bad characters HighlighterColors.BAD_CHARACTER can be used.
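
    A lexer-based highlighter might look like the following sketch; the SimpleLexerAdapter and SimpleTypes names are assumptions, while the attribute keys are mapped onto standard platform colors.

```java
// Sketch of a lexer-based syntax highlighter; Simple* names are hypothetical.
import com.intellij.lexer.Lexer;
import com.intellij.openapi.editor.DefaultLanguageHighlighterColors;
import com.intellij.openapi.editor.HighlighterColors;
import com.intellij.openapi.editor.colors.TextAttributesKey;
import com.intellij.openapi.fileTypes.SyntaxHighlighterBase;
import com.intellij.psi.TokenType;
import com.intellij.psi.tree.IElementType;

public class SimpleSyntaxHighlighter extends SyntaxHighlighterBase {
  public static final TextAttributesKey KEY =
      TextAttributesKey.createTextAttributesKey("SIMPLE_KEY", DefaultLanguageHighlighterColors.KEYWORD);
  public static final TextAttributesKey COMMENT =
      TextAttributesKey.createTextAttributesKey("SIMPLE_COMMENT", DefaultLanguageHighlighterColors.LINE_COMMENT);
  // Standard key for lexer errors, as described above
  public static final TextAttributesKey BAD_CHARACTER =
      TextAttributesKey.createTextAttributesKey("SIMPLE_BAD_CHARACTER", HighlighterColors.BAD_CHARACTER);

  @Override public Lexer getHighlightingLexer() { return new SimpleLexerAdapter(); }

  @Override public TextAttributesKey[] getTokenHighlights(IElementType tokenType) {
    if (tokenType == SimpleTypes.KEY) return pack(KEY);
    if (tokenType == SimpleTypes.COMMENT) return pack(COMMENT);
    if (tokenType == TokenType.BAD_CHARACTER) return pack(BAD_CHARACTER);
    return TextAttributesKey.EMPTY_ARRAY;
  }
}
```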


    If a particular sequence of tokens is invalid according to the grammar of the language, the PsiBuilder.error() method can be used to highlight the invalid tokens and display an error message.


    A plugin can register one or more annotators in the com.intellij.annotator EP, and these annotators are called during the background highlighting pass to process the elements in the PSI tree of the custom language. Annotators can analyze not only the syntax, but also the semantics using PSI, and thus can provide much more complex syntax and error highlighting logic. The annotator can also provide quick fixes. When the file is changed, the annotator is called incrementally to process only changed elements in the PSI tree.

    To highlight a region of text as a warning or error, the annotator calls createErrorAnnotation() or createWarningAnnotation() on the AnnotationHolder object passed to it, and optionally calls registerFix() on the returned Annotation object to add a quick fix for the error or warning.
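
    A minimal annotator following the createErrorAnnotation()/registerFix() pattern described above might look like this (this is the older AnnotationHolder API; recent platform versions use AnnotationHolder.newAnnotation() instead). SimpleProperty and CreatePropertyQuickFix are hypothetical names.

```java
// Sketch of an Annotator registered in the com.intellij.annotator EP.
// SimpleProperty and CreatePropertyQuickFix are assumed, illustrative classes.
import com.intellij.lang.annotation.Annotation;
import com.intellij.lang.annotation.AnnotationHolder;
import com.intellij.lang.annotation.Annotator;
import com.intellij.psi.PsiElement;

public class SimpleAnnotator implements Annotator {
  @Override
  public void annotate(PsiElement element, AnnotationHolder holder) {
    if (element instanceof SimpleProperty) {
      SimpleProperty property = (SimpleProperty) element;
      if (property.resolveValue() == null) {              // semantic check via PSI, not just syntax
        Annotation annotation =
            holder.createErrorAnnotation(property, "Unresolved property");
        // Optionally attach a quick fix to the error
        annotation.registerFix(new CreatePropertyQuickFix(property));
      }
    }
  }
}
```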

    External tool

    If the custom language employs external tools for validating files in the language, it can provide an implementation of the ExternalAnnotator interface and register it in com.intellij.externalAnnotator EP. The ExternalAnnotator highlighting has the lowest priority and is invoked only after all other background processing has completed. It uses the same AnnotationHolder interface for converting the output of the external tool into editor highlighting.

    References and Resolve

    The implementation of resolve based on the standard helper classes consists of the following components:

    • A class implementing the PsiScopeProcessor interface which gathers the possible declarations for the reference and stops the resolve process when it has successfully completed. The main method which needs to be implemented is execute(), which is called to process every declaration encountered during the resolve, and returns true if the resolve needs to be continued or false if the declaration has been found. The methods getHint() and handleEvent() are used for internal optimizations and can be left empty in the PsiScopeProcessor implementations for custom languages.
    • A function which walks the PSI tree up from the reference location until the resolve has successfully completed or until the end of the resolve scope has been reached. If the target of the reference is located in a different file, the file can be located, for example, using FilenameIndex.getFilesByName() (if the file name is known) or by iterating through all custom language files in the project (iterateContent() in the FileIndex interface obtained from ProjectRootManager.getFileIndex() ).
    • The individual PSI elements, on which the processDeclarations() method is called during the PSI tree walk. If a PSI element is a declaration, it passes itself to the execute() method of the PsiScopeProcessor passed to it. Also, if necessary according to the language scoping rules, a PSI element can pass the PsiScopeProcessor to its child elements.
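
    The first component can be sketched as follows: a processor that stops the walk as soon as a declaration with the right name is found, leaving getHint() and handleEvent() with their default empty behavior as noted above. The class name is illustrative.

```java
// Sketch of a PsiScopeProcessor that resolves a reference by name.
import com.intellij.psi.PsiElement;
import com.intellij.psi.PsiNamedElement;
import com.intellij.psi.ResolveState;
import com.intellij.psi.scope.PsiScopeProcessor;

public class SimpleResolveProcessor implements PsiScopeProcessor {
  private final String myName;
  private PsiElement myResult;

  public SimpleResolveProcessor(String name) { myName = name; }

  @Override
  public boolean execute(PsiElement element, ResolveState state) {
    if (element instanceof PsiNamedElement &&
        myName.equals(((PsiNamedElement) element).getName())) {
      myResult = element;
      return false;   // declaration found: stop the resolve process
    }
    return true;      // keep walking up the tree
  }

  public PsiElement getResult() { return myResult; }
}
```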

    Code Completion

    Reference Completion

    Reference completion is easier to implement, but supports only the basic completion action.

    To fill the completion list, the IDE calls PsiReference.getVariants() either on the reference at the caret location or on a dummy reference that would be placed at the caret. This method needs to return an array of objects containing either strings, PsiElement instances or instances of the LookupElement class. If a PsiElement instance is returned in the array, the completion list shows the icon for the element.

    The most common way to implement getVariants() is to use the same function for walking up the tree as in PsiReference.resolve(), combined with a different implementation of PsiScopeProcessor which collects all declarations passed to its execute() method and returns them as an array for filling the completion list.
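
    Such a collecting processor might look like this sketch; unlike the resolving processor, it never stops the walk. The class name is an assumption.

```java
// Sketch of a collecting PsiScopeProcessor for PsiReference.getVariants().
import com.intellij.codeInsight.lookup.LookupElement;
import com.intellij.codeInsight.lookup.LookupElementBuilder;
import com.intellij.psi.PsiElement;
import com.intellij.psi.PsiNamedElement;
import com.intellij.psi.ResolveState;
import com.intellij.psi.scope.PsiScopeProcessor;
import java.util.ArrayList;
import java.util.List;

public class SimpleVariantsProcessor implements PsiScopeProcessor {
  private final List<LookupElement> myVariants = new ArrayList<>();

  @Override
  public boolean execute(PsiElement element, ResolveState state) {
    if (element instanceof PsiNamedElement &&
        ((PsiNamedElement) element).getName() != null) {
      // Creating the element from a PsiNamedElement lets the list show its icon
      myVariants.add(LookupElementBuilder.create((PsiNamedElement) element));
    }
    return true;   // never stop: collect every declaration in scope
  }

  public Object[] getVariants() { return myVariants.toArray(); }
}
```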

    Contributor-based Completion

    Contributor-based completion provides more features, supports all three completion types (basic, smart and class name) and can be used, for example, to implement keyword completion.

    Implementing the CompletionContributor interface gives you the greatest control over the operation of code completion for your language.
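
    A keyword-completion contributor can be sketched as follows; SimpleLanguage.INSTANCE and the contributed keyword are illustrative assumptions.

```java
// Sketch of a CompletionContributor adding a keyword to basic completion.
// SimpleLanguage is a hypothetical custom language class.
import com.intellij.codeInsight.completion.CompletionContributor;
import com.intellij.codeInsight.completion.CompletionParameters;
import com.intellij.codeInsight.completion.CompletionProvider;
import com.intellij.codeInsight.completion.CompletionResultSet;
import com.intellij.codeInsight.completion.CompletionType;
import com.intellij.codeInsight.lookup.LookupElementBuilder;
import com.intellij.patterns.PlatformPatterns;
import com.intellij.util.ProcessingContext;

public class SimpleCompletionContributor extends CompletionContributor {
  public SimpleCompletionContributor() {
    // Contribute "version" whenever basic completion is invoked in a Simple file
    extend(CompletionType.BASIC,
           PlatformPatterns.psiElement().withLanguage(SimpleLanguage.INSTANCE),
           new CompletionProvider<CompletionParameters>() {
             @Override
             protected void addCompletions(CompletionParameters parameters,
                                           ProcessingContext context,
                                           CompletionResultSet result) {
               result.addElement(LookupElementBuilder.create("version"));
             }
           });
  }
}
```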

    Lookup Items

    Items shown in the completion list are represented by instances of the LookupElement interface. These instances are normally created through the LookupElementBuilder class.

    For every lookup element, you can specify the following attributes:

    • Text. Shown left-aligned.
    • Tail text. Shown next to the main item text, is not used for prefix matching, and can be used, for example, to show the parameter list of the method.
    • Type text. Shown right-aligned in the lookup list and can be used to show the return type or containing class of a method, for example.
    • Icon
    • Text attributes. Bold, Strikeout, etc.
    • Insert handler. The insert handler is a callback which is called when the item is selected, and can be used to perform additional modifications of the text (for example, to put in the parentheses for a method call)
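
    The attributes above map directly onto LookupElementBuilder calls, as in this sketch; the item text, signature and icon are made-up examples.

```java
// Sketch: building a lookup item with the attributes listed above.
import com.intellij.codeInsight.lookup.LookupElement;
import com.intellij.codeInsight.lookup.LookupElementBuilder;
import com.intellij.icons.AllIcons;

public final class LookupItems {
  static LookupElement methodItem() {
    return LookupElementBuilder.create("calculate")       // text (left-aligned, used for matching)
        .withTailText("(int a, int b)")                   // tail text: e.g. parameter list, not matched
        .withTypeText("int")                              // type text (right-aligned): e.g. return type
        .withIcon(AllIcons.Nodes.Method)                  // icon
        .withBoldness(true)                               // text attributes
        .withInsertHandler((context, item) ->             // callback invoked on selection:
            context.getDocument().insertString(context.getTailOffset(), "()"));
  }
}
```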

    Find Usages

    The steps of the Find Usages action are the following:

    • Before the Find Usages action can be invoked, the IDE builds an index of words present in every file in the custom language. Using the WordsScanner implementation returned from FindUsagesProvider.getWordsScanner(), the contents of every file are loaded and passed to the words scanner, along with a words consumer. The words scanner breaks the text into words, defines the context for each word (code, comments or literals) and passes the word to the consumer. The simplest way to implement the words scanner is to use the DefaultWordsScanner implementation, passing to it the sets of lexer token types which are treated as identifiers, literals and comments. The default words scanner will use the lexer to break the text into tokens, and will handle breaking the text of comment and literal tokens into individual words.
    • When the user invokes the Find Usages action, the IDE locates the PSI element whose references will be searched. The PSI element at the cursor (the direct tree parent of the token at the cursor position) must be either a PsiNamedElement or a PsiReference which resolves to a PsiNamedElement. The word cache is used to search for the text returned from the PsiNamedElement.getName() method. Also, if the text range of the PsiNamedElement includes some other text besides the identifier returned from getName() (for example, if the PsiNamedElement represents a JavaScript function and its text range includes the “function” keyword in addition to the name of the function), the method getTextOffset() must be overridden for the PsiNamedElement to return the start offset of the name identifier within the text range of the element.
    • Once the element is located, the IDE calls FindUsagesProvider.canFindUsagesFor() to ask the plugin if the Find Usages action is applicable to the specific element.
    • When showing the Find Usages dialog to the user, FindUsagesProvider.getType() and FindUsagesProvider.getDescriptiveName() are called to determine how the element should be presented to the user.
    • For every file containing the searched words, the IDE builds the PSI tree and recursively descends that tree. The text of each element is broken into words and then scanned. If the element was indexed as an identifier, every word is checked to be a PsiReference resolving to the element the usages of which are searched. If the element was indexed as a comment or literal and the search in comments or literals is enabled, it checks if the word is equal to the name of the searched element.
    • After the usages are collected, results are shown in the usages pane. The text shown for each found element is taken from the FindUsagesProvider.getNodeText() method.
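
    The steps above can be backed by a FindUsagesProvider sketch like the following; the SimpleLexerAdapter and SimpleTypes names are assumptions, and the token sets passed to DefaultWordsScanner are the identifier, comment and literal sets mentioned in the first step.

```java
// Sketch of a FindUsagesProvider using DefaultWordsScanner; Simple* names are hypothetical.
import com.intellij.lang.cacheBuilder.DefaultWordsScanner;
import com.intellij.lang.cacheBuilder.WordsScanner;
import com.intellij.lang.findUsages.FindUsagesProvider;
import com.intellij.psi.PsiElement;
import com.intellij.psi.PsiNamedElement;
import com.intellij.psi.tree.TokenSet;

public class SimpleFindUsagesProvider implements FindUsagesProvider {
  @Override
  public WordsScanner getWordsScanner() {
    // identifier / comment / literal token sets drive the word index
    return new DefaultWordsScanner(new SimpleLexerAdapter(),
        TokenSet.create(SimpleTypes.KEY),      // identifiers
        TokenSet.create(SimpleTypes.COMMENT),  // comments
        TokenSet.create(SimpleTypes.VALUE));   // literals
  }

  @Override public boolean canFindUsagesFor(PsiElement element) {
    return element instanceof PsiNamedElement;
  }
  @Override public String getHelpId(PsiElement element) { return null; }
  @Override public String getType(PsiElement element) { return "property"; }
  @Override public String getDescriptiveName(PsiElement element) {
    String name = ((PsiNamedElement) element).getName();
    return name != null ? name : "";
  }
  @Override public String getNodeText(PsiElement element, boolean useFullName) {
    return getDescriptiveName(element);
  }
}
```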

    Renaming Refactoring

    Safe Delete Refactoring

    Code Formatter

    Code Inspections and Intentions

    Structure View

    Surround With

    Go to Class and Go to Symbol