Thursday, September 24, 2020

Packaging the Extension


I am not, at the moment, interested in distributing an extension through the marketplace, so the final step is to package the extension up so that I can include it in my "regular" version of VSCode (and share it with anybody else who is interested).

Everything here is based on my reading of the Microsoft documentation.

VSCE

All of the tasks around publishing seem to depend on a tool called vsce. This is obtained from Microsoft via npm as follows:
npm i -g vsce
Installing it globally means it can be run from anywhere.

Packaging the extension

From the package root (in my case vscode-java), simply run
vsce package
This produces a vsix file.

Installing the extension

From within VSCode, select the Extensions tab in the sidebar (on the left-hand side) and, from the drop-down menu, choose Install from VSIX…. A file chooser then comes up; find the appropriate .vsix file and select it. This installs the extension in the current VSCode.

You can remove the extension by selecting the "settings" icon shown by the plugin (the gear icon) and choosing "Uninstall". It may be necessary to click on "Reload Required" to complete the uninstallation.

Handling the Java Executable

That, of course, would all be so easy if it weren't for the Java executable. This is going to "move" in the process of bundling the extension, and so the code that locates it needs to be able to distinguish between the "development" and "release" cases.

This has two facets: first off, we need to copy it from its current build directory (in lsp-java) to the extension directory (vscode-java); then we need the extension.ts to look in both places and choose the development one if available, else the production version. All of this ends up being sufficiently complicated that I created a script, package.sh, to do all the packaging.

Conclusion

Packaging and installing an extension for VSCode was pleasantly easy. In fact, the whole process of dealing with extensions has been easier than I was expecting, and now I feel ready to tackle this in the real world.

LSP Navigation and Completion


The final things I want to do before moving on are to navigate to elements and to complete them. This demonstrates that we have at least a basic understanding of the program we are analyzing.

Building up Knowledge

The first thing to do is to build a repository of definitions encountered by the compiler.

In a real compiler, this is obviously the main purpose of parsing, but having ignored that up to now for the purposes of this exercise, I need to add building a repository to the SimpleParser. For simplicity, I am just going to capture the definitions of cards and contracts.

In order to be ready before users open files, we need to parse all the files in the workspace during initialization and store all the definitions in the repository. I want to make a few points here before moving on.
  • In the real world, this calls for quite a cunning data structure: it needs to be able to be refreshed quite frequently (every time we receive a didChange event) and incrementally (removing all the definitions from the damaged areas) without leaving any dangling links (if we allow references), and (for the purposes of completion) it needs to be able to find all the tokens matching some typed text (possibly in some preference order such as usage). For this demo, I am just going to keep it small, simple and brute force.
  • The IgnorantLanguageServer class now seems to be growing too big and to have too many responsibilities: if this were a blog about clean code, I would have a post or two about how I broke this up and found the ideal pattern that describes it; but for now, I am just going to keep putting more miscellaneous material in here as long as it comes under the very general heading of "coordination".
As soon as I started to do this, I realized that the protocol I was using on the server side was different to the client side … of course my dependencies are out of date. I upgraded to version 0.9.0 of the lsp4j library and adjusted to cope with the fallout.

In line with the general air of insouciance which pervades this article - if not indeed this entire blog - I am not going to parse the files at all carefully, but just go after what I think should be there, being just careful enough not to cause any fatal exceptions. The important thing is to capture, for each interesting definition, its name and its location.

Because this is an incremental, repeated process, we need to clear the relevant entries out of the repository every time before we parse a file. We do this by scanning through the repository, looking at each entry in turn. Because we are parsing an entire file at a time, we only need to match the entry's URI, although if we were interested in handling smaller changes we could also look at line numbers (and possibly even character positions) and clear out at that level. As noted above, our repository just stores all the items in a TreeSet, so we brute-force our way through the entries and remove the ones with a matching URI.
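
To make that concrete, here is a minimal sketch of the sort of repository I mean, including the lookup methods used later in this post. The class and method names are mine and purely illustrative; the real code in the repository differs in detail.
import java.util.ArrayList;
import java.util.List;
import java.util.TreeSet;
import org.eclipse.lsp4j.Location;

public class Repository {
  // one entry per definition (cards and contracts); a TreeSet is plenty for this demo
  private final TreeSet<Entry> entries = new TreeSet<>();

  static class Entry implements Comparable<Entry> {
    final String name;        // the defined symbol
    final Location location;  // LSP location: uri plus the range of the definition
    Entry(String name, Location location) { this.name = name; this.location = location; }
    public int compareTo(Entry other) {
      int c = name.compareTo(other.name);
      if (c != 0) return c;
      c = location.getUri().compareTo(other.location.getUri());
      if (c != 0) return c;
      return Integer.compare(location.getRange().getStart().getLine(),
                             other.location.getRange().getStart().getLine());
    }
  }

  // called by the parser whenever it captures a definition
  public void record(String name, Location where) {
    entries.add(new Entry(name, where));
  }

  // called before re-parsing a file: brute-force removal of everything defined in that uri
  public void clear(String uri) {
    entries.removeIf(e -> e.location.getUri().equals(uri));
  }

  // all locations where a symbol of exactly this name was defined
  public List<Location> findDefinitions(String name) {
    List<Location> found = new ArrayList<>();
    for (Entry e : entries)
      if (e.name.equals(name))
        found.add(e.location);
    return found;
  }

  // all defined names of which the typed text is a proper prefix
  public List<String> completeFrom(String prefix) {
    List<String> names = new ArrayList<>();
    for (Entry e : entries)
      if (e.name.startsWith(prefix) && !e.name.equals(prefix))
        names.add(e.name);
    return names;
  }
}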

Go to Definition

In order to handle the "Go to Definition" functionality, we need to inform the client that we are able to do this.

For some reason, this requires two steps. First, we must add the capability in the initialize method of the IgnorantLanguageServer, and then we must implement a new method (declaration) in the TextDocumentService. I have to say I don't like this approach. Maybe it's just because I haven't caught up with default methods in Java interfaces, but it seems to me that a better pattern would be to say in initialize "here is an implementation of a (one function) interface that supports declaration".

Anyway, the intent of the declaration method is to find all the locations where there might be a declaration of a symbol. Not, of course, that you get the symbol per se - you get the location of the cursor when the user asks for the definition. One approach would be to have the parser record all instances of all symbols and then have a location to location table; while I think there is a lot to be said for this (for example, it would make Find Usages easy to implement) it seems to be more work than I want to do right now. So instead, I'm going to go to the relevant file and line and try and identify the token around that location.

This feels like a responsibility for the parser, so I'll put the code there. I don't like allowing abstraction bleed, but I'm going to pass across an LSP Position because I don't at the moment have my own position abstraction as such (I just stored the primitive line and character position in name) so if I were going to take this further I would need to refactor (my real compiler, of course, does have a very clear, and quite rich, position abstraction). Because this method does not come with any file text, I also had to store the text of all files during parsing so that I can read it back now (it would of course be possible to read them from the disk).

Having found the token (an exercise in string manipulation), we then ask the repository to hand back a list of locations where it is defined. While (logically) that might be only one, in a "live" compiler where users can type anything, it is not unreasonable to suppose that there might be multiple definitions; or indeed, that ambiguity might exist as to which of several matching definitions might be intended. The repository scans through all its definitions and sees which match. Again, for convenience, I have let the LSP abstractions bleed through into the repository; obviously the code in the ParsingTextDocumentService should adapt these.
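
Stripped down, the flow is something like this sketch. findTokenAt and findDefinitions are my own illustrative names, and lsp4j's real declaration signature wraps the result in an Either, which I have omitted here.
// inside ParsingTextDocumentService: identify the token under the cursor, then ask the
// repository for every location where something of that name is defined
CompletableFuture<List<Location>> declarationAt(String uri, Position position) {
  String token = parser.findTokenAt(uri, position);   // string manipulation around the cursor
  if (token == null)
    return CompletableFuture.completedFuture(Collections.emptyList());
  return CompletableFuture.completedFuture(repository.findDefinitions(token));
}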

And hey presto, we are done. If you click on the uses of HitMe or Multiplier in the contract.st file, and select "Go to Definition" from the popup menu, the appropriate definition in contract.fl will be selected.

Completion

Completion works by you typing some characters and the editor offering you a selection of possible completions. Obviously, this list should be ordered in a way which makes the more likely selections come first, and it should contain possible and partial matches as well as the "obvious" ones. I'm not going to do any of that. I'm just going to offer completions for which what the user has typed is a proper prefix. Yes, it is because I'm lazy - but my justification is that all of that hard work is not relevant to the integration.

As before, the first thing is to announce that we can do this. This again comes in two parts: we need to specify the capability and then we need to implement the completion method. In this case, however, the capability is more than just a boolean value: it wants a boolean saying whether we are a "resolve provider" and a list of "trigger characters". It is not entirely clear what it means to be a "resolve provider", but I think it is reasonable to say "no" for now. Googling around, the idea behind "trigger characters" seems to be that not everything you type in the editor should force the expense of a round trip to the server to obtain completions. By default, it seems, alphanumeric characters will; if your language wants to extend this, it is possible to specify "trigger characters" that will force completion logic (think < in HTML).

The implementation is much the same as before: try and figure out the token and then ask the repository to complete it. Again, I let the LSP abstractions bleed over into the repository code.
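
Putting the two halves together, the shape is roughly this; the trigger character is just an example, and completeFrom is the illustrative repository method sketched earlier, so the details of the real code differ.
// 1. in IgnorantLanguageServer.initialize(): announce the capability; the CompletionOptions
//    constructor takes the "resolve provider" flag and the list of extra trigger characters
capabilities.setCompletionProvider(new CompletionOptions(false, Arrays.asList(".")));

// 2. in ParsingTextDocumentService: find what has been typed and offer every definition
//    of which it is a proper prefix
CompletableFuture<List<CompletionItem>> completionsAt(String uri, Position position) {
  String prefix = parser.findTokenAt(uri, position);
  List<CompletionItem> items = new ArrayList<>();
  for (String name : repository.completeFrom(prefix))
    items.add(new CompletionItem(name));
  return CompletableFuture.completedFuture(items);
}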

And just like that, it works!

Resolving Completions

It would seem that there is a lot more information that can be communicated about completions than just the text to fill in. Some of this (such as the icon to display) probably needs to be returned at the same time as the original completion item to be useful; but other data (such as how to insert it, etc.) is only relevant if you choose to do the insert.

Given that collecting and transmitting this data is probably expensive, I think the logic is that the completion() method returns a list of all the candidates and the resolve() method asks for detailed information about only the selected candidate.

I did a brief - and not very interesting - foray into this area to finish up this work.

Conclusion

Wiring up the LSP services does not seem that hard. It seems to be mainly a question of implementing the correct parsing and repository operations on the back end and adapting between the expectations of the LSP protocol and how you have your information stored. In a real system, a sophisticated repository is essential to good performance.

Connecting VSCode to Java


The next step in the process of integrating my compiler with VSCode is to get the LanguageClient inside VSCode talking to an LSP server running inside a Java process.

I was pinning my hopes on deriving this code from what Adam Voss had done. Sadly, I could not reverse engineer what he had done on the client side, so I started trying to research for myself. Unfortunately, although there appears to be good "explanatory" documentation for VSCode, there doesn't seem to be very much in the way of "reference" documentation, so I ended up looking at the code.

Now, I'm not a typescript expert, but going through this, it seems that what I really want to do is to provide a hash for the "server options" containing the field command. OK, I can do that. Everything else I'm going to liberally borrow from the lsp-sample from Microsoft.

Let's get coding

Another problem rears its head at this point. I have one project; lsp-sample seems to have two. Logically, it has a client and a server. I want to copy the client and do my own server (in Java). But the top level directory also has a package.json and the relevant items seem to be spread across the two. I don't really understand how this works, but I will just steal what I need to go in my package.json and hope for the best.

First off, I need the dependency on language-client:
"dependencies": {
  "vscode-languageclient": "^6.1.3"
},
Then I need to define activationEvents, which, if I understand it correctly, is the way in which you tell VSCode that your extension is willing to take on a particular file (possibly along with other situations).

So we declare two activation events (one for each of the two languages declared in package.json) which notice when an editor is opened which meets their criteria for editing.
"activationEvents": [
  "onLanguage:flas",
  "onLanguage:flas-st"
],
When I tried this, it didn't work and I received an error message that
properties `activationEvents` and `main` must both be specified or must both be omitted
I didn't see anything about this in the documentation, but by reference to the sample, it would seem that you have to specify a value for main in package.json, pointing to where the extension.ts file is found.
"main": "out/extension.js",
This may not immediately appear to be where extension.ts will be found, and it isn't: in the real world node runs JavaScript, so main points at the file that tsc generates, which is why the "out/" is required (also note the .js extension here).

Defining the Extension

So that's the configuration. But what goes in extension.ts? I'm not going to reproduce it all here, but it is complicated enough - and took me long enough to figure out - that I think it's worth digging into a little bit.

Working backwards, we need to create and start a LanguageClient:
// Create the language client and start the client.
client = new LanguageClient(
  'IgnorancePlugin',
  'Plugin through Ignorance',
  serverOptions,
  clientOptions
);
        
// Start the client. This will also launch the server
client.start();
The four arguments here are:
  • the id of the plugin which will come up from time to time later;
  • the title of the plugin which is displayed in the output and extensions windows in VSCode;
  • the options about how to run the LSP server;
  • the options about how the client is configured.
The client options seem fairly easy, although I have to say that I didn't dig too far into what all the possibilities were.
// Options to control the language client
let clientOptions: LanguageClientOptions = {
  // Register for our languages
  documentSelector: [
    { scheme: 'file', language: 'flas' },
    { scheme: 'file', language: 'flas-st' }
  ]
};
This seems to me somewhat duplicative of what we configured in the package.json but it may not be.

Finally, we have the server options, which, as noted above, can come in one of several varieties. To specify an external server which needs to be launched each time, the server options need to be an object with a command specified. args, env and cwd may also be specified. In the absence of cwd the current workspace root is used.

Thus I end up with these options:
let serverOptions: ServerOptions = {
  command: "java",
  args: [
    "-jar",
    path.resolve(context.extensionPath, '..', 'lsp-java', 'build', 'libs', 'lsp-java-all.jar')
  ]
};
Here, context.extensionPath is the path where the extension is found. Because I know that the Java binary is going to be found relative to the extension, I can specify this here. I'm not sure what happens when you come to package the extension for distribution, but that's a topic for another day.

Oh, and don't think that I figured all this out a priori. I spent a lot of time running sample scripts that were outputting relevant information and causing errors in VSCode to find out all the information I needed.

The Java Server

On the server side, I simply repackaged the Adam Voss server, putting it into my own package (under the lsp-java directory), and made it operate over standard input/output rather than using a socket.

So now it works, right? How do we know?

It appears that there is a mechanism to view the communication between client and server, but how easy is that to do in practice? Actually, not too hard.

First off, you need a block like this in the contributes hash of package.json.
"configuration": {
  "type": "object",
  "title": "Ignorance Settings",
  "properties": {
    "IgnorancePlugin.trace.server": {
    "scope": "window",
    "type": "string",
    "enum": [
      "off",
        "messages",
        "verbose"
      ],
      "default": "off",
      "description": "Traces the communication between VS Code and the language server."
    }
  }
}
Overall, the "configuration" block defines all the settings that your extension has. You can define anything (within reason) here and access it from both the client and the server.

The settings block is automatically configured as a properties "window" inside Settings. If you go to Code > Settings and then select Extensions, you will see a sub-block called Ignorance Settings (the name comes from the title above).

The property defined here is interpreted directly by VSCode as defining the trace level of the communication. This works by knowing exactly the plugin name (the id of the plugin as specified in the LanguageClient constructor in extension.ts) followed by .trace.server. From the settings window, it is possible to change the value to messages or verbose and see the communication between client and server. This is obviously vital for understanding what is going on.

Once turned on, you can quit the Extension window and restart it. As you do so, you should see the messages appear in the Output window. If you can't see the Output window anywhere, you can make it pop up by selecting View>Output from the main menu.

When I do this and restart with contract.st open, I see three messages sent across to the server: initialize - (0), initialized and textDocument/didOpen. It's a lot of output, so I'm not going to reproduce it all here, but a number of fields are interesting.

In initialize, a rootPath and a rootUri are passed across, which appear to be the location of the workspace folder. The response contains information about the features that are implemented - presumably derived from the code I have copied across to get started.

The initialized message is empty.

The textDocument/didOpen message contains a full uri, the languageId which VSCode has identified it as having, a version number (which is updated every time you make a change - i.e. type anything), and the full text of the original document.

Every time the document changes, a textDocument/didChange message is sent across: the textDocument element describes the uri and updated version of the document, and the contentChanges contains the full text of the document. I believe that advanced usage exists to say that you only want to see some sub-ranges of the changes, but for me right now, getting the whole document every time feels like a win. In fact, it would seem that this is a setting configured during the initialization step (see IgnorantLanguageServer.java).

Wiring up a "Real" Compiler

In my first server-side check in, I simply accepted the code as I found it, but it didn't really do what I wanted. Here I am going to rework this code and, as I do so, describe how it now works.

Now, I don't want to wire up my full compiler at the moment (well, actually I do, but here is not the place and now is not the time). But I definitely want to check out how to deal with errors and report messages if something has gone wrong. So I am going to write a very simple compiler to handle the flas and flas-st languages without dealing with all their complexity.

FLAS is an indentation-based language, so I'm going to start by saying that we can have two top-level elements in FLAS files, contract and card (mainly because that is what is in the sample I have here). Any other keyword (for example error) is going to be an error and a message needs to come back to that effect. Lines that are not indented are considered comments; lines that are indented more than one tab stop are ignored by this simple parser.
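
As a sketch, the heart of such a parser is little more than this; the Consumer-based error reporting is an illustrative assumption, and the real SimpleParser in the repository is organised differently.
import java.util.function.Consumer;
import org.eclipse.lsp4j.Diagnostic;
import org.eclipse.lsp4j.Position;
import org.eclipse.lsp4j.Range;

public class SimpleParser {
  private final Consumer<Diagnostic> errors;   // wherever the errors end up being reported

  public SimpleParser(Consumer<Diagnostic> errors) { this.errors = errors; }

  // crude line-by-line "parse": unindented lines are comments, one tab of indentation
  // introduces a top-level keyword, and anything indented further is simply ignored
  public void parse(String text) {
    String[] lines = text.split("\n", -1);
    for (int i = 0; i < lines.length; i++) {
      String line = lines[i];
      if (!line.startsWith("\t") || line.startsWith("\t\t"))
        continue;                                   // a comment, or too deeply nested to care about
      String trimmed = line.trim();
      if (trimmed.isEmpty())
        continue;
      String keyword = trimmed.split("\\s+")[0];
      if (!keyword.equals("contract") && !keyword.equals("card"))
        errors.accept(new Diagnostic(
            new Range(new Position(i, 0), new Position(i, line.length())),
            "unknown keyword: " + keyword));
    }
  }
}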

The main function is in LSPServer. This simply creates an IgnorantLanguageServer and creates an LSP server using this and the combination of standard input and output.
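
With lsp4j, that main method is only a few lines. Something like this sketch, assuming IgnorantLanguageServer implements LanguageServer and LanguageClientAware:
import org.eclipse.lsp4j.jsonrpc.Launcher;
import org.eclipse.lsp4j.launch.LSPLauncher;
import org.eclipse.lsp4j.services.LanguageClient;

public class LSPServer {
  public static void main(String[] args) {
    IgnorantLanguageServer server = new IgnorantLanguageServer();
    // wire the server to standard input/output, which is how VSCode launched us
    Launcher<LanguageClient> launcher =
        LSPLauncher.createServerLauncher(server, System.in, System.out);
    server.connect(launcher.getRemoteProxy());  // give the server its handle back to the client
    launcher.startListening();                  // start processing JSON-RPC messages
  }
}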

The top level of the LSP server is IgnorantLanguageServer. It is responsible for setting up the connection and wiring things together. The initialize method is the first method called from the client side and passes in the expectations from the client side; the response is the set of capabilities that the server is prepared to offer.
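
In sketch form, a minimal initialize that just asks for full-document synchronization (the setting mentioned above) looks something like this; the real method in the repository declares more capabilities as later posts add them.
// inside IgnorantLanguageServer
@Override
public CompletableFuture<InitializeResult> initialize(InitializeParams params) {
  ServerCapabilities capabilities = new ServerCapabilities();
  // ask the client to send the complete document text on every didChange
  capabilities.setTextDocumentSync(TextDocumentSyncKind.Full);
  return CompletableFuture.completedFuture(new InitializeResult(capabilities));
}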

The connect method is called from LSPServer when the client connects and enables the server to respond.

Because we want to implement the text document service, we do that and return a wrapper around our simple parser, the ParsingTextDocumentService. While there are many methods this could implement, we are basically just interested in the client opening or changing documents. Every time this happens, we parse the document using our SimpleParser; in the process of parsing, it sends back any errors it encounters.
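
In outline, that looks roughly like this; the SimpleParser hand-off is the illustrative one sketched above, and client is the LanguageClient handed over in connect.
// inside ParsingTextDocumentService
@Override
public void didOpen(DidOpenTextDocumentParams params) {
  parseAndReport(params.getTextDocument().getUri(), params.getTextDocument().getText());
}

@Override
public void didChange(DidChangeTextDocumentParams params) {
  // we asked for full-document sync, so the change event carries the whole text
  String text = params.getContentChanges().get(0).getText();
  parseAndReport(params.getTextDocument().getUri(), text);
}

private void parseAndReport(String uri, String text) {
  List<Diagnostic> errors = new ArrayList<>();
  new SimpleParser(errors::add).parse(text);               // collect any errors it encounters
  client.publishDiagnostics(new PublishDiagnosticsParams(uri, errors));
}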

Having done all of that, restarting the extension window of VSCode produces messages about the errors in our code. Great!

Conclusion

In this episode, we successfully wired up a Java based back end for the LSP and enabled it to read and parse documents sent from the editor. In doing so, we were able to send back errors as we came across them.

That's most of what I want to do. There are just two more things I want to try in this fake universe - can I navigate to a definition from a reference and can I complete a typename?

Syntax Highlighting in VSCode

So, time to start writing some code. Or, at least copying it.

I created a new directory in the ignorance repository, called vscode-java. This is where I'm going to put the VSCode half of the language server - the client if you will. As trailed in the last post, my starting point is going to be copying the contentprovider sample and simplifying it. So that's the code that I copied.

And then I went through "simplifying" it - i.e. deleting most of the actual code so that I was just left with the syntax highlighting portion. I then copied in a couple of sample text files from my own repository. And obviously I had to run npm install and open it in VSCode.

How Syntax Highlighting Works

The instructions on configuring syntax highlighting in the Microsoft documentation are actually quite clear, but far from exhaustive. Mostly they defer the details to "it's the same as TextMate" without referencing anything.

The official word on TextMate grammars appears to be this document, but it's not very detailed itself. I haven't managed (yet) to find any more introductory work.

The key thing seems to be to configure language and grammar contributions in package.json, so I did this:
"contributes": {
  "languages": [
    {
      "id": "flas",
      "extensions": [ ".fl" ]
    },
    {
      "id": "flas-st",
      "extensions": [ ".st" ]
    }
  ],
  "grammars": [
    {
      "language": "flas",
      "path": "./syntax/flas.json",
      "scopeName": "source.flas"
    },
    {
      "language": "flas-st",
      "path": "./syntax/flas-st.json",
      "scopeName": "source.flas-st"
    }
  ]
}
Here I appear to be defining two languages, but that's just because I have two types of file for my language: the main files have extension .fl and the system tests have .st. Each of these has its own grammar. The grammars are placed in files under the syntax directory and each has its own scope name for theming purposes.

Defining the Grammars

The grammars are defined in JSON approximately in line with the description of "TextMate grammars" insofar as I understand it (not a lot as yet). I'm sure it will become clearer as I dig in more. Sadly, the syntax is sufficiently opaque as to discourage you from learning by example.

However, this is an excerpt of one of the grammars I defined.
{
  "name": "flas",
  "scopeName": "source.flas",
  "patterns": [
    { "include": "#comment" },
    { "include": "#contract-intro"}
  ],
  "repository" : {
    "comment": {
      "name": "comment.line.file",
      "match": "^[^ \t].*$"
    },
    "contract-intro": {
      "begin": "\tcontract\\b",
      "beginCaptures": {
        "0": { "name": "keyword.intro" }
      },
      "end": "(//.*$|$)",
      "name": "statement.contract",
      "patterns": [{"include":"#typename"}]
    }
  }
}
The name and scopeName match the language and scopeName from the package.json. Failure to match both invalidates the grammar and it will not be used for syntax highlighting. The patterns array defines a set of productions or rules that can occur at the top level. In spite of being defined by regular expressions, there is an element of grammar productions to this, and it is certainly NOT the case that each regular expression just matches what it feels like.

The repository allows you to define more complex (possibly recursively nested) rules. The include syntax says that, instead of specifying a pattern directly, it is possible to delegate to a rule in the repository. The reference to the rule name must begin with a #, while the rule name itself does not; I'm not sure why. The patterns entry, both at the top level and within a repository definition, is an array; any of the patterns may match some or all of the inner text, but they cannot match overlapping text.

It's also important to realize that the begin and end patterns are not part of the body of the rule and so are not included in the sub-matching of the patterns array but rather have separate logic to style them (beginCaptures and endCaptures).

Debugging

If reading (and writing) these grammars is hard, figuring out what is going on - and wrong - is insanely hard. First off, every time you make a change to any of the grammar files, you need to restart the Extension instance. This is done by using Sh-F5 to stop the current instance and then F5 to start a new instance.

It is then possible to see the consequences of your actions. If you're fortunate, you will see visual effects on the screen. If not, or if you just want clarity about what happened, it's possible to bring up the Token Inspector to see what happens. In the extension window, type M-Sh-P to bring up the command window and then type some portion of Developer: Inspect Editor Tokens and Scopes. Selecting this pops up a window which shows which rules were applied at the current location. To choose a different location, simply click there (unless it's under the popup window, in which case you may need to resort to trickery such as clicking elsewhere first or using the keyboard). To dismiss the window, press ESC.

On the upside, every time you restart, VSCode picks up from where it left off, so you don't need to go through the steps of re-opening the relevant windows. It also learns fairly quickly that you want to use the Token Inspector and suggests it sooner. And of course, if you are desperate, you can bind it to a simple keyboard shortcut.

Conclusion

Actually wiring up syntax highlighting was surprisingly easy. Getting the patterns to work was not. The complete lack of any tooling (at least in VSCode) that points out errors or failings was really annoying. Some things that particularly caught me out were forgetting the # when referencing a rule in the repository; not doubling the backslash characters before special characters (such as \b) in regular expressions (but note that this is not wanted for \t, which is a tab character); the begin and end syntax along with the fact that they are not included in the inner patterns; and the fact that regular expressions do not overlap.

I need to spend considerably more time looking into the syntax and trying to figure out how to use it to reasonably describe a quite complex "context sensitive" grammar using this weird mixture of regular expressions and production rules. But that is more for the "real world" than it is for this blog (although I may come back here if I have any wisdom to distill), and in the meantime it is time to move on to the task of integrating with a Java back end.

Sunday, September 20, 2020

First cut at LSP


Building on my previous research, I'm ready to try and do something with VSCode and LSP.

The place to start is by seeing if we can get some existing sample code to build and run. Let's try that.

Doing something

Based on my previous research, I decided to start by checking out Adam Voss' LanguageServer over Java example:
git clone https://github.com/adamvoss/vscode-languageserver-java-example.git
Given that this was based on a Microsoft sample, I checked this out too:
git clone https://github.com/microsoft/vscode-languageserver-node-example.git
But then, reading the README, it transpires that in the meantime Microsoft have deprecated this (actually, quite a long time ago). Looking at the replacements, it looks to me not so much that the technology has changed as that Microsoft have moved their samples from separate repositories into a "mono repo". Anyway, I went ahead and checked that out, too.
git clone https://github.com/Microsoft/vscode-extension-samples
This seems to have a number of samples within it, so which one should we pick? It seems that the most basic one is helloworld-sample, so let's start there. There appears to be some "getting started" documentation available, although it doesn't start with checking out this repository but instead uses some complicated mechanism to generate your own, and seems to overly complicate matters. But based on all of this, including the instructions in the README (which also seemed a little confusing), this is what worked for me.
  • In a terminal, cd into the helloworld-sample directory
  • Run npm i
  • In VSCode, select "File > New Window" and then "Open Folder…" and then select the "helloworld-sample" directory
  • From here, push F5 or select "Run > Start Debugging"
  • A new window opens
Now, I haven't (and probably won't) dig into what this extension does, because at the moment I'm not interested, but it offers a new command you can run by doing CMD-SHIFT-P and typing Hello World. This pops up a message in the bottom right corner of the window. OK, very good, we have something working. Close both the VSCode windows (not saving whatever it was we hadn't changed) and we are back to where we started.

Working with LSP

It seems like the sample closest to what I had been looking at before is the lsp-sample. So let's repeat the above process to see if we can make this sample work as well.
  • In a terminal, cd into the lsp-sample directory
  • Run npm i
  • In VSCode, select "File > New Window" and then "Open Folder…" and then select the "lsp-sample" directory
  • From here, push F5 or select "Run > Start Debugging"
  • A new window opens
The documentation says that you need to open a file in the "'plain text' language mode" but I have no idea how to open a file in a particular language mode. So I guessed that what it means is open a file that doesn't have an extension that it recognizes. Fortunately, I have a fair few of those, so I tried one. The important thing seems to be to find a file which does not already have a built in language server in VSCode (e.g. .js or .ts files are a bad choice).

The sample appears to have three features: if you type a 'J' or a 'T' it will offer the completions "JavaScript" and "TypeScript" respectively; meanwhile, if you type an "identifier" that starts with at least two uppercase letters, it flags an error.

OK, that seems to do what I want, except it does it entirely inside node, whereas I want to be delegating all the hard work to Java. There doesn't seem to be an official sample that does that, so I think I'm back to rolling my own based on the model I have from Adam Voss.

Syntax Highlighting

At this point, it seems worth going on a diversion to look into syntax highlighting. As I noted last time, syntax highlighting is handled differently from the language features that depend on LSP: it is done entirely on the client side using regular expressions.

Strangely, there doesn't seem to be a sample that specifically addresses this; I'm not sure why. There are three samples, however, which do include syntax highlighting of which contentprovider-sample seems to be the simplest. This is actually a sample about how you can generate documents and display them in an editor window; the syntax highlighting is just an adjunct that is there in order to make the generated document "look good" (and possibly to define regions to which affordances can be added).

Syntax highlighting is covered thoroughly in the documentation but the thrust is that you need to add a "contribution point" into the package.json file that defines the grammars for the languages you want to define (you also need to define a "languages" contribution point which identifies the languages based on file extensions, but I think that is fairly much a given here). The basic idea is that you identify regions of text and associate each of them with a "scope" (the word scope implies, I believe accurately, that the regions can nest within other regions). In the contribution point, you specify a list of bindings, each of which identifies a language, the path to a JSON grammar file (relative to the extension directory) and the root scope for the language.

The JSON grammar file defines an object representing the grammar of the language. Within this, there appears to be duplication of the scopeName. Since we are referencing the grammar file by name, I don't see how these two can be different, yet both seem to be needed, which is my definition of duplication. The other two values are patterns which is a list of pattern names which can appear at the top level, and repository which amounts to the complete set of productions for the grammar. These can, of course, be recursive, and the upshot is that each identified pattern in the grammar has a chain of matching rules (or scope) which enables VSCode to apply the appropriate themes.

The themes can likewise be introduced as extensions, and the theme-sample shows how this can be done. It would seem that Microsoft for some reason have adopted much of this from TextMate and largely expect you to have existing TextMate themes that you wish to reuse. There does appear to be an alternative scheme for defining a new color scheme which would be compatible with the grammar.

These grammar JSON files do not look easy to define. Since I already have a machine-readable formal grammar for my language, I'm sure I will just end up writing one more tool to convert that into a set of regular expressions for syntax highlighting. It would appear that somebody at or connected with Microsoft has done something similar to generate some of their examples.

Conclusion

Well, I still haven't written any code. But I am starting to get closer to knowing what it is that I need to do, so next time out I'm going to start by copying across parts of these various samples and trying to get something which does an amalgam of them: register a language, syntax highlighting, themes and start a Java LSP server.

Friday, September 18, 2020

Integrating with VisualStudio using Language Server Protocol


In my daily life, I do a lot of work with compilers and programming languages. In the course of doing that, I want to provide a quality editing experience; in the modern era that means things like syntax highlighting, auto completion and rapid feedback on errors. But I don't want to write tools, I want to integrate with them. The question then arises as to which editors to integrate with. As a Java programmer, in the past I have generally used Eclipse, but it is not an easy architecture to plug into.

Recently, I have started using Microsoft's VSCode to do front-end JavaScript, HTML and CSS development: in spite of being a Microsoft product, it actually seems to be quite stable, reliable and sane. Its approach to embedding editing features for new languages is not necessarily to strictly embed them, but to allow for a connection to an external server which delivers the relevant features. This appeals to me because it enables me to write most of the integration code in Java - with which I am familiar and which is the implementation language of the compiler itself - thus making the job easier.

As an added benefit, the Language Server Protocol is supported by a wide array of languages and tools which means that doing this work once provides easier access to a range of tools, although it is still necessary to implement a relatively thin "client" experience for each tool.

So what is the Language Server Protocol?

Basically, the language server protocol is a communication protocol between editing tools and "compilers" which abstracts away the language details and allows the two ends to communicate in terms of the kinds of abstract operations that tools want to perform on language elements - look up definitions, search for usages, complete symbols, etc.

The protocol is a version of JSON-RPC, framed with lightweight HTTP-style headers and typically carried over a socket or standard input/output.

Building a Server in Java

Nobody wants to actually go to all the effort of writing the code to read and write these JSON-RPC messages. Fortunately, people have been there before us and done that. For example, in Java there is the lsp4j library which makes it possible to write to interfaces and then have a main method that wires up a server.

Most of the work is involved in implementing the LanguageServer interface and implementing all the methods. The main method then instantiates this and creates a "server" by wrapping this using the LSPLauncher.createServerLauncher() method. In addition to the server instance, this method requires an input stream and an output stream. Where do these come from?

This is where the genuine connection to the client comes in. You need a physical transport layer - most likely a socket connection - from which you can extract a stream in each direction. In passing these to the "server" you enable it to read requests and write responses.

Finally, there is a little bit of magic in wiring up the "remote client" interface (by which the server communicates back with the actual client) with the server code by having the server implement the LanguageClientAware interface.
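
Pulling those pieces together, the wiring looks roughly like this sketch. MyLanguageServer is a stand-in for your own class implementing LanguageServer and LanguageClientAware, and I assume here (as the example discussed below does) that the editor side is listening on a port and the Java process dials in:
import java.net.Socket;
import org.eclipse.lsp4j.jsonrpc.Launcher;
import org.eclipse.lsp4j.launch.LSPLauncher;
import org.eclipse.lsp4j.services.LanguageClient;

public class Main {
  public static void main(String[] args) throws Exception {
    MyLanguageServer server = new MyLanguageServer();
    // the physical transport: a socket to the port the editor told us about
    try (Socket socket = new Socket("localhost", Integer.parseInt(args[0]))) {
      Launcher<LanguageClient> launcher = LSPLauncher.createServerLauncher(
          server, socket.getInputStream(), socket.getOutputStream());
      server.connect(launcher.getRemoteProxy());   // the LanguageClientAware "magic"
      launcher.startListening().get();             // process requests until the connection closes
    }
  }
}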

Embedding a Connector in VSCode

Integrating with VSCode is not quite as simple as merely implementing a server. For a variety of reasons, VSCode requires a "beachhead" on the client side to handle the communication with the server.

The process is described in the VSCode documentation. Apart from anything else, this defines the capabilities of the language server and provides the implementation of the connection layer including starting up the "remote" (from the VSCode's point of view) server.

Additionally, some of the functionality associated with a language (such as syntax highlighting) is not implemented over the server protocol at all. Syntax highlighting, for example, is implemented strictly on the client side using regular expression matching. I can't say as I like it - parsing is so much more sane - but it is sadly common to most tools.

A Simple Example

For this post I am only going to offer a little code and, breaking with tradition, it is not even my code. Microsoft offers an example of how to connect to another node.js based language server but the late Adam Voss ported this to use a Java server.

Reversing the order of presentation from the sections above, I am going to start with the client side (i.e. the code embedded in VSCode). Obviously, this needs to be node.js compatible and, since we are in Microsoft-land, that means typescript (although I believe it is possible to use JavaScript if you insist).

The client code

Each client needs to have a manifest associated with it, in package.json. Most of this is, of course, vanilla node.js/npm configuration: setting up dependencies and the like.

The key elements seem to be engines, activationEvents and configuration. These are described in some detail in the developer guide and I don't think I have very much to add to that at this point. Obviously, the engines describes the versions of VSCode with which the plugin is compatible; activationEvents describes the portion of the protocol that the plugin implements; and configuration covers the rest of the concerns, including (it would seem) allowing the plugin to introduce settings which the user can then configure.

What is not configured - and is therefore presumably implicit - is how the client is configured. It would seem that the module needs to export a single function called activate which receives an ExtensionContext and is responsible for creating a new LanguageClient object (defined by a Microsoft library) and, after configuring it as appropriate, calling start on it. The cunning, of course, is all in the options parameters that are passed to it.

Moving on to the Java example, we can look at the equivalent extension.ts file in this repository.

Starting towards the bottom (line 76), the language client is created and started (all on one line). The client options look fairly vanilla, but in lieu of the server options, there is a function name. For full disclosure, I haven't so much as cloned this repository yet, so for all I know it doesn't even work, but at the same time I know that JavaScript - and typescript, presumably - will accept a function as a parameter and then call it when it needs the value. I'm assuming that is what is going to happen here. It is worth noting, on the other hand, that this repository is a couple of years out of date, so it is also possible that it is using a no-longer-supported feature.

Anyway, assuming that it is right, it is passing in the function which takes up most of the module (lines 17-60). Again, confusingly, it returns a Promise of a StreamInfo, not the LanguageServerOptions I was expecting. But no matter.

First, it creates a socketpair which, on completion, resolves the promise by providing the reader and the writer. It also listens for the socket to be closed and reports that to the console (it doesn't actually close anything, which seems surprising, but it is possible that somebody else catches that). It connects the listen event to a handler which starts a java process (the server), telling it the port number which has been opened.

I have to admit that there are a number of things going on here which don't seem exactly right to me - but that is probably because I don't understand enough about how the node.js net.server abstraction works.

The server side

The server is a fairly simple and brain-dead Java application. On the flip side, it doesn't do very much.

The main code is in App.java. Basically, this reads the port from the arguments, creates a client around it, and then does the work to set up an LSP server using the streams and an ExampleLanguageServer.

This implements the minimal number of methods to implement a TextDocumentService, although for reasons I don't understand, the actual implementation is split between a class FullTextDocumentService and an inner class inside the language server class.

The server has methods to initialize, connect, shutdown and exit the server, as well as to return the implementation of the FullTextDocumentService. It also provides an implementation of the WorkspaceService which appears to be responsible for handling user configuration changes.

Conclusion

I've learned a lot about VSCode and the Language Server Protocol that I didn't previously know and having saved away the links, I am hoping this will be of use to me when I return to actually try and implement something.

The next step is obviously to clone these various repositories, bring everything up to date, get it to work as is and understand it a little better. After that, I will need to try and understand the breadth of the protocol before trying to connect an actual compiler.

Expect to hear more.

Monday, September 7, 2020

Chaos


This weekend I was re-reading James Gleick's excellent book Chaos and was reminded of my youth, when a team of researchers at Bristol University were working with the Mandelbrot set as an example of a "highly parallel" problem. The focus of their research was the Inmos Transputer and the Occam programming language. In those days, this was a compute-intensive problem.

The modern-day inheritor of that kind of parallel-programming architecture is the GPU. It occurred to me to try and use a mandelbrot generator as a way of experimenting with low-level GPU programming, but in the end I was more interested in how much more powerful my current Macbook Pro is than all that hi-tech architecture of the late 1980s. TL;DR: you can't believe how much.

Implementation

So I set about building a quick-and-dirty implementation of a chaos field generator. I wanted to go a little bit beyond just the Mandelbrot Set and consider other examples too - such as the attractors for the various solutions to n³ - 1 = 0 using Newton's Method.

"Obviously" I chose to do all of this in JavaScript. The main reason being that all the code runs in the browser (using a 2d canvas), and that makes it easy to distribute. Moreover, JavaScript has the ability to plug-and-play with different functions, and I used that to control what you see.

The code can be found in the usual place under the directory chaos, and the "results" are up on my website at http://www.gmmapowell.com/chaos/.

Conclusion

You may be able to, but I cannot believe how much more powerful our computers are today than they were 30 years ago. Things that took forever on the latest, most expensive equipment can now be done in an interpreted language on a regular laptop (or even a phone!).

Wednesday, September 2, 2020

A More Functional Database

I have recently been using DynamoDB and, to be frank, its API (at least in Java) is, IMHO, awful. So awful, in fact, that in wrapping the API as described below, I was forced to write a mini-wrapper to call from within the wrapper.

Obviously, I was not going to put up with this. But rather than launching in and doing "something", I sat back and considered what I really wanted. If you know me or read this blog frequently, you would probably fairly rapidly come up with a list something like this:
  • Functional or functional-leaning;
  • Tell-Don't-Ask;
  • Transactional;
  • Asynchronous.
Now, as it happens, these things are not really in conflict in this case, although I ended up with three APIs:
  • A mini-wrapper I had to create just to tidy up the DynamoDB interface;
  • A low-level "tell-don't-ask" API; and
  • A more transactional API with a functional feel.
What I really want to talk about here is the last of these; the other two are really just stepping stones on the way, although, for full disclosure, it was only when I had used the TDA API for a while and experienced the pain of doing it, that the third API occurred to me.

While I’m fully disclosing, this code didn't start here as an experiment, but grew organically out of one of my many “real life” complex and messy projects across a swathe of production code intended to support multiple databases as well as other integrated systems; for simplicity I’ve rewritten history a little and pulled out all the relevant classes for DynamoDB and assembled an example test case that steps through the basic CRUD flow using the functional API.

What am I thinking?

First things first. What do I even mean by a "functional" database? Almost by definition, databases are the opposite of functional. The very way we think about - and describe - databases is CRUD: a sequence of read/update operations with occasional creation and deletion to keep life interesting.

Given my long background in functional programming, a lot of things have been percolating in my brain over the decades, and recently I have experimented a little with (the somewhat-functional) FaunaDB (which I will eventually get around to writing up here); in doing that, I noticed something of a similarity between functional programs and how I might ideally write database logic.

Here is some functional code:
top = repeat x '*'
x = 5
repeat 0 c = []
repeat n c = c:repeat (n-1) c
Nothing particularly special here. But there is a rhythm to functional programs, which can be described as "define something; use it; repeat". I've often described writing functional programs as "do a little bit of the work now and push the rest off onto another function".

On the other hand, what drives a functional program is the fact that somebody somewhere wants to know a result and expresses this somehow. Generally, this either comes from a "main" method or from some kind of console or REPL. The "top level" expression is broken down into sub-expressions which are evaluated in turn until the basic blocks are reached.

(As an aside, TDA almost exactly inverts this logic: it takes all the basic blocks and says that the results should be sent to a consolidator that combines them and then promotes them to the next level until the top level handler is reached.)

So the question is: how does this relate to databases?

A "pull" model for databases

The normal way of dealing with databases is as an imperative begin-read-write-commit loop. But this transaction can also be viewed as a single operation in a REPL - the model used by functional programs. In this model the transaction becomes:
  • Decide what you want to do
  • Do all the reads and transformations
  • Do all the writes in a single step
How is this different? Most importantly, a standard topic in database theory is "read your writes", which says that, once you have done a write in a transaction, there is a choice about whether, when you use "the value" again - particularly if you read it back from the database - you see the version you wrote or the one that existed before your transaction started. By deferring all the writes until the end of the transaction, we avoid this conundrum, which is exactly what you would expect from a functional model, in which values do not change over the course of a function's evaluation.

The other major change is in the way in which we describe the steps of the logic. As with a functional program, we name each value that we read in; as we name it, it becomes available for other steps in the logic. For example, a transaction which reads two values and then writes the result might ordinarily look like this:
begin tx
x = get 'A'
y = get 'B'
z = x + y
put 'C' z
end tx
In a functional notation, we might write something like this:
tx = [Store 'C' z]
z = x + y
x = get db 'A'
y = get db 'B'
Now, this is neither purely functional nor really executable; but the idea is that we are describing the transaction rather than actually running it.

Handling the impedance mismatch with imperative languages

Now, while I want to use a "more functional" approach to databases, I in fact still want to do this from within Java - an imperative language. How do I handle that?

It's actually not all that hard. Java has functions, and I can say that each of my "steps" maps to a function. I have an entry point, which kicks off the initial GET operations and identifies the logic operation (z) that I want to be invoked when both of the return values are available. I then define z and annotate its parameters to say where they come from. Basically, the transaction becomes a state that the functions can access through the parameter annotations. Once a value is read or calculated, it is available, and any method that depends on it can now be executed.
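
To give a flavour of the shape I am describing, here is a sketch. Relation and the idea of naming the logic method with a string both appear in the real code discussed below, but the method names (get, store, logic) and the @From annotation are made up purely for illustration; the actual API in the repository differs.
public class AddValuesHandler {
  // the entry point: kick off the reads and name the logic step that depends on them
  public void begin(Relation r) {
    r.get("A", "x");        // read item 'A' and call the result "x"
    r.get("B", "y");        // read item 'B' and call the result "y"
    r.logic("computeC");    // run computeC once both x and y have arrived
  }

  // only runs when every value named in its annotations is available
  public void computeC(@From("x") int x, @From("y") int y, Relation r) {
    r.store("C", x + y);    // queued as a pending write, applied only when the whole unit completes
  }
}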

Handling asynchronicity

It's obviously very important that the database access be asynchronous - the alternatives are to waste threads blocking or just to kill performance - which aren't really alternatives at all.

This is done by having all the functions call into an asynchronous layer in the database and provide a central place to call back. When they do, the current state is updated with the new value and the list of pending logic calls is examined. Any that are now "ready" - i.e. those that were just waiting for this value to arrive - can now execute, possibly launching more GET or LOGIC requests. Any of these that are ready to execute can be run the moment that the current method returns; others will be deferred until all their arguments are present. Every logic method can also add to the pool of "pending writes".

Eventually, all of the logic is complete and it is possible to do the writes - unless an error occurred in the transaction, in which case none of the writes will be done.

An implementation

The implementation (available in the git repository) is called GLS for get-logic-set/subscribe, describing this alternative loop.

Start with the simple test case (SimpleGLSUsage). The fundamental concept here is the UnitOfWork, which is created in the test initializer initTest. This corresponds to the conventional notion of a "transaction" (I have shied away from the word transaction in part because it is overloaded and in part because the semantics of the underlying database are unspecified; in the case of DynamoDB it is not transactional).

Within a unit of work, it is possible to create multiple parallel Relations. A relation represents a thread of work going on with its own namespace - a scope, if you will, within an outer definition in a functional program. This allows multiple lines of logic to coexist while still being part of the same "unit" - the same values are only read once, have the same value across all relations, and the unit succeeds or fails together.

This simple test case is not a series of unit tests. Rather, it is a "script" for executing tests. To make this work, JUnit 5 is used along with its OrderAnnotations. The first test ensures that we can create a trivial object. The enact method on the unit of work says that we have configured everything, and it is ready to go; waitForResult blocks until the unit has completed - success or failure.
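
For reference, the JUnit 5 scaffolding that turns a test class into an ordered script is just this; the method names and bodies here are placeholders, not the real code from SimpleGLSUsage.
import org.junit.jupiter.api.MethodOrderer;
import org.junit.jupiter.api.Order;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestMethodOrder;

@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
class ScriptedTestsSketch {
  @Test @Order(1) void createTheRecord() { /* configure the unit of work, enact(), waitForResult() */ }
  @Test @Order(2) void readItBack()      { /* ... */ }
  @Test @Order(3) void checkAndUpdate()  { /* ... */ }
  @Test @Order(4) void deleteTheRecord() { /* ... */ }
}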

The second test simply attempts to read this object back in.

The third test reads the object back in and then prints it. Because Java does not treat functions as "first-class" objects, the method printHello is referenced as a string in the call to logic. This is the name of a method in the RelationHandler class, which in this case is defined to be this, which is the current class.

The fourth to sixth tests check that the greeting is what we would expect it to be, update it, and check that it has changed. The final test cleans up by deleting the record (although if anything goes wrong, the whole thing will be deleted on restart).

It is obviously possible to run these tests, but some setup is required: you obviously need an AWS account with a DynamoDB instance configured; you need to create a table and pass its name to the test in the test.table.name property.

Conclusion

The normal database paradigm fits well with imperative languages, but has the normal drawbacks of those languages - most specifically, very slow, synchronous behaviour. It's hard in imperative languages to break out of that because of the complexity of dealing with all the asynchronous state. Changing the metaphor - and making a central agent responsible for the state management - simplifies the code and improves reliability.

Interestingly, in writing this, I can see that my actual implementation does not map perfectly onto my mental model - my implementation has turned out to be more imperative than my mental model. My entry point is fairly close to the "bottom" of the execution stack - as it would be with a TDA implementation. To match the mental model more closely, the "get" operations should not be invoked directly from the entry point function, but should be their own functions, and each of the functions should be named to match the "value" it produces. Maybe I should try again and report on that experiment.