Vim Language Server Client (LSC) plugin first impressions

After months of problems with CoC (https://github.com/neoclide/coc-eslint/issues/72) I’ve given up and I’m trying other plugins.

I used ALE with eslint for ages, but never got it working decently with LSP. I tried ALE with Deoplete – but ALE is very slow and Deoplete took ages to configure (python3 + pip + msgpack 1.0+ is a pain to install).

I came across this blog post showing love for Language Server Client (LSC): https://bluz71.github.io/2019/10/16/lsp-in-vim-with-the-lsc-plugin.html.

But this had a problem: LSC only covers LSP, not linting, and I need eslint.

However, this Reddit comment described perfectly the approach of installing both ALE and LSC:

ALE with eslint, LSC with tsserver (+ LSP compliant wrapper typescript-language-server).

This seems like a good combo. ALE has been reliable for ages, and slow linting doesn’t matter too much – it’s slow auto-completion that’s a problem. ALE also doesn’t have the floating window features that CoC and LSC do.

LSC, like ALE, is almost entirely Vimscript (with some Dart that doesn’t require a further install), which makes it an easy install – having no further dependencies makes a plugin much nicer to live with in Vim.

Installation of raw plugin

The first thing that disappoints me in the plugin’s current instructions is that there is no plugin-manager code to copy-paste. I love it when I can blindly follow the instructions, copy-paste, and it all magically works.

So, to install the plugin in your .vimrc or init.vim, for example using Plug or Vundle:

Plug

Plug 'natebosch/vim-lsc'

Then run :PlugInstall

Vundle

Plugin 'natebosch/vim-lsc'

Then run :PluginInstall

Pathogen

git clone https://github.com/natebosch/vim-lsc ~/.vim/bundle/vim-lsc

Then run :Helptags to generate help tags

Dein

call dein#add('natebosch/vim-lsc')

This will also work with any other plugin manager that supports GitHub plugins as sources.

Installation of typescript language server

However at this point you have a problem because there is nothing to indicate that anything is working. No auto-completion happens, nothing.

If you install Coc, from what I remember, auto-completion starts happening immediately.

You can run :LSC<tab> which should open up a list of the possible LSC commands to at least show that you have it installed.

Coc starts working for pretty much every file, auto-completing using the words in the current file. This immediate response is great for knowing you’ve installed the plugin correctly. Plus, with Coc you can run :CocInstall coc-eslint, which installs the eslint extension and then, again from what I remember, it just works immediately.

I want to test LSC with the typescript LSP, which I’d already installed with:

npm install --global typescript

Fair enough, but rather frustratingly TypeScript’s tsserver isn’t a fully compliant LSP server, so you need the LSP-compliant wrapper typescript-language-server mentioned above, installed globally in the same way as typescript:

npm install --global typescript-language-server

I have two specific requirements which makes my commands differ:

  1. I use yarn
  2. I installed yarn/npm using sudo

So my actual command was:

sudo yarn global add typescript typescript-language-server

Coc mimics VS Code and works with tsserver out of the box, which saves you from having to install the extra library. If LSC could be made to work with tsserver it would be a nice step. Coc even goes so far as to install tsserver for you, so you just need :CocInstall coc-tsserver and the magic starts happening. So you can install and get Coc working without ever leaving Vim – the same goes for eslint, because developers using eslint will typically already have it in their project and it just gets picked up magically.

The frustration with typescript-language-server is that there is the far too similarly named javascript-typescript-langserver. I have no idea of the difference, nor do I really care – I just want the one that works. The LSC documentation for JavaScript language servers fails here: it shows me how to configure both of them but gives me no idea which one I should prefer.

I’m very much a proponent for the “don’t make me think” mantra because that’s what most people are after when they’re trying to install a plugin.

Why go through all the work of writing your plugin in Vimscript only to leave the documentation bare, leaving people frustrated?

Configuration

Annoyingly the configuration for Javascript is buried. There’s no mention in the README that there is a wiki that lists all the language server configurations, and even in the wiki the home page is bare, so you have to spot the pages menu on the left-hand side.

Then when you get to the Javascript section you have the previously mentioned problem that there are two servers and you don’t know which to choose.

So: I already have tsserver installed, it’s what VS Code uses, and therefore it’s what the vast majority of developers now use, so I’ll use that.

let g:lsc_server_commands = {
  \ 'javascript': 'typescript-language-server --stdio'
  \ }

Further frustration though is that there are no comments in there giving helpful tips on how to set things up properly. The bluz71 blog above has the useful extra hint:

For LSC to function please ensure typescript-language-server is available in your $PATH.

So you should make sure to add the npm/yarn global installation directory to your $PATH – it’s easy enough to find instructions for this. To test, make sure you can run this in the directory where you start Vim:

$ typescript-language-server --version
0.4.0

Obviously you’ll probably get some other version number, but you should at least get a response. You don’t set up a path to the language server binary in the config so it assumes you’ve got it directly available.
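You can also check this from inside Vim itself. A minimal sketch (my own, not from the plugin docs) using Vim’s executable() function, which returns 1 when the command can be found on your $PATH:

" Warn on startup if the language server isn't reachable from Vim.
if !executable('typescript-language-server')
  echomsg 'typescript-language-server not found in $PATH'
endif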

That’s all folks… not quite

That should be that. With a restart of Vim the magic should happen – open up a Javascript file, start typing away and BAM, auto-complete pop-ups should start appearing.

However, for me it didn’t. Patiently typing a few characters, then reading the documentation on how many characters to type before something should happen, did nothing.

I tried a :LSClientGoToDefinition and it spewed out an error:

Error detected while processing function lsc#reference#goToDefinition[2]..lsc#server#userCall:
line    2:
E684: list index out of range: 0
E15: Invalid expression: lsc#server#forFileType(&filetype)[0]

Firstly getting errors is always bad and secondly this error message makes no sense.

The problem here is that there is no ‘health check’ that I could find. ALE gives a very good diagnostics page via :ALEInfo. The :LSClientAllDiagnostics and :LSClientWindowDiagnostics commands sound like they might be useful, but they aren’t at all in this situation.

Even after reading through :help lsc I did not spot anything to help with spotting issues. But the intro there is very helpful:

There is no install step for vim-lsc. Configure the filetypes that are tracked
by installed language servers in the variable “g:lsc_server_commands”. Each
value in this dict should be a string which corresponds to either an
executable in your “$PATH”, an absolute path to an executable, or a
“host:port” pair. If multiple filetypes are tracked by the same server they
should be entered as separate keys with the same value. The value may also be
a dict to allow for additional configuration.

It was only after re-reading the bluz71 blog again that I spotted my problem:

For a given filetype the LSC plugin will take care of launching, communicating and shutting down the named language server command.

My problem is that because I have the mxw/vim-jsx plugin, my javascript filetype becomes javascript.jsx, so my config needs:

let g:lsc_server_commands = {
  \ 'javascript.jsx': 'typescript-language-server --stdio'
  \ }
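As the help text quoted earlier says, multiple filetypes tracked by the same server should be entered as separate keys with the same value. A fuller sketch of what that looks like (the typescript entries are my assumption – only add them if you actually edit those files):

let g:lsc_server_commands = {
  \ 'javascript': 'typescript-language-server --stdio',
  \ 'javascript.jsx': 'typescript-language-server --stdio',
  \ 'typescript': 'typescript-language-server --stdio',
  \ }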

I then re-sourced my .vimrc via :source %, tried again with my JavaScript file, and still nothing worked.

However, after a restart of Vim, I got an error that flashed up in the Vim command line and disappeared – but then finally the magic started to happen.

So to know if LSC is working, the first thing you notice is that it subtly highlights the word you are on and any other words (‘symbols’) that match that.

Now auto-completion starts working and I can tweak away with key mappings. However I don’t really care about key mappings – they’re easy to tweak.
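For the record, the laziest option I found (check :help lsc for the details – I’m going from memory here) is to switch on the plugin’s default mappings rather than define my own:

" Use vim-lsc's default key mappings instead of rolling your own.
let g:lsc_auto_map = v:true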

Final thoughts

This does seem like a great plugin now that it’s working. It has the speed and functionality of Coc, and it works – which is a major plus point over Coc at the moment.

What I fundamentally care about when trying these LSP plugins is getting something to work as fast as possible so I can test out the plugin. I can then add other language servers and configurations, but until I’ve got something working there’s nothing but frustration.


Embracing the Neovim/Vim Terminal

I’ve only finally just started using Neovim’s terminal. Vim 8 has this too of course, but it was Neovim that championed it and included it by default.

I switched to Neovim about a year ago to see if there was anything missing compared to Vim. I find Neovim fascinating purely because of how faithful it is to Vim. They had the perfect answer to open source – “if you don’t like it fork it” and make your own. Vim itself is healthier because of the additions that Neovim included.

I literally can’t visually tell if I am using Vim or Neovim. You have the one initial setup to use your existing .vimrc file and the rest just works perfectly.

I’ve never had to raise a bug because of some incompatibility. All the Vim plugins I use work.

The sheer scale of porting an editor that has been around for 30 years and implementing what appears to be 99.9% of its functionality is amazing.

First steps

One of the big things of Neovim was its inclusion of a terminal.

I’d tried the terminal a couple of times but found it jarring – it’s like the Vim experience all over again: you’ve no idea how to quit.

You need a combination of Ctrl + \, Ctrl + n, which is just as insane as :wq for those who don’t know Vim. I don’t know whether it’s an intentional homage, but all I know is that it scared me off for a year.

Somewhere during that year I tried tentatively again and added this (probably from this Vi StackExchange answer) as suggested to my vimrc:

" terminal easy escape
tnoremap <Esc> <C-\><C-n>

But then I just left it there. I always kept a separate terminal open to run the commands I needed – currently a bunch of node docker containers. That obviously means alt + tab between the windows, or ctrl + pg up/pg down if you have terminal tabs (I’m using Ubuntu mostly).

However, I kept seeing my colleagues running their terminal session within VS Code. I’d roll my eyes at the wasted screen space, but it did seem kind of natural – another step in your development process combined into the one editor.

I was always a fan of “Linux is my IDE”, so I didn’t see a problem with switching to the terminal. It also feels fairly natural, as both Vim and any other terminal programs are still running in the terminal, so I saw it as a benefit of Vim.

It always seemed natural that if you’re running a terminal process you should open up an actual terminal, not some emulator, get yourself the real thing.

However there’s a few niggles:

  1. Searching the terminal output, there are ways but they aren’t as nice as the Vim searching
  2. Scrolling naturally around the output, to the top and back to the bottom – especially when it’s 10,000+ lines of log output
  3. Copying and pasting between the terminal output and Vim
  4. It’s janky, either wasting the screen space of terminal tabs or having to alt + tab to hunt for the other terminal window

Welcome back

So I dived in again a couple of days ago. Having the Esc remapping is so natural that I’d forgotten I’d added it to my vimrc – it was simply the first thing I tried in order to get out. I only went searching for it again because I remembered the frustration of quitting from before and didn’t understand why it was now so easy.

But now, suddenly, it’s awesome. Once you’ve hit Esc it’s just another buffer. It can hide amongst my hundreds of other buffers, so no screen space is wasted. This obviously assumes you’re using buffers, not ‘Tabs’ (aka Tab Pages).
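For reference, a minimal sketch of how I open and get back to a terminal buffer in Neovim (the :b partial-name trick works because terminal buffers are named term://…):

" Open a terminal in a split rather than taking over the current window.
:split | terminal

" Later, from any other buffer, jump back to it by partial buffer name.
:b term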

It resolves all the problems above:

  1. Searching – I now have my standard \ command to search and highlight plus grepping/ripgrep/fzf
  2. Scrolling – now all the gg top and G bottom commands are so much nicer to use. I never feel the need to use the mouse to scroll aimlessly through the text, and I can always use Ctrl + f/b and j/k too – all the lovely Vim commands are now available
  3. Copying and pasting is just Vim buffers no need for the external clipboard
  4. As it’s all just buffers it’s easy to switch to with buffer switching.

I understand it’s just an emulator, not the real terminal, but bash is kind of designed to be emulated, so that’s not a real problem I guess – and now I get terminal + evil mode!

So thanks again Neovim for adding this in.

Steps of caution

There are some peculiarities of Neovim terminals to get used to.

  1. Esc is sometimes used in the terminal for genuine reasons, for example cancelling an fzf request – with the remapping that no longer works. My solution has been to make the escape sequence <leader><Esc> (see the mapping after this list); this lets Esc still reach the terminal program while staying easy to quit out of.
  2. The inception of running Vim via an SSH session inside a Vim terminal has a few niggles, but handles pretty well.
  3. It’s a bit hard to re-find your terminal if you’ve got a couple open – you need to remember the buffer number mostly
  4. If you close Vim you’ll kill all the processes running there without warning
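The mapping for point 1 is just a variation on the earlier tnoremap – a sketch of what I use:

" Leave terminal mode with <leader><Esc>, so a bare <Esc> still reaches
" programs like fzf running inside the terminal.
tnoremap <leader><Esc> <C-\><C-n>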

Getting started with ReasonML

I write this as a stumbling, fumbling guide for how to actually get started, when all you have is a JavaScript and React background with no knowledge of OCaml.

I’m using a combination of Fedora and Ubuntu.

Obviously start here: https://reasonml.github.io/

Installation

I went off the rails right at this point, because you need bsb installed globally and you run into the sudo/not-sudo argument. Lots of people want you to install yarn/npm as non-sudo, but I do it as sudo.

The guide suggests that you do it without sudo and makes no mention of using sudo. So for people like me the documentation goes wrong from the very first step. This makes me sad.

The issue is that because I install npm with the Fedora package manager, it’s installed as root. So I need to run sudo npm install -g, which is all fine and good; some well-meaning people rightly point out that this should be avoided if possible, but in the case of Fedora it’s unavoidable.

I wrote this all up in an issue #2168, with the most relevant comment I made:

In the npm troubleshooting guide and Grunt getting started guide they have the following advice for global installs:

(You may need to prefix these commands with sudo, especially on Linux, or OS X if you installed Node using its default installer.)

I also hit the same problem with my ubuntu install.

If I run without sudo then the yarn add completes fine, but I get the following error if I try to run bsb:

Command ‘bsb’ not found

So I have to run:

sudo yarn global add bs-platform

However now, you have to run bsb with sudo too. Otherwise you get the following error:

$ bsb -init hello -theme basic-reason
Making directory hello
npm WARN checkPermissions Missing write access to /usr/lib/node_modules
npm ERR! code EACCES
npm ERR! syscall access
npm ERR! path /usr/lib/node_modules
npm ERR! errno -13
npm ERR! Error: EACCES: permission denied, access '/usr/lib/node_modules'
npm ERR!  { [Error: EACCES: permission denied, access '/usr/lib/node_modules']
npm ERR!   stack:
npm ERR!    'Error: EACCES: permission denied, access \'/usr/lib/node_modules\'',
npm ERR!   errno: -13,
npm ERR!   code: 'EACCES',
npm ERR!   syscall: 'access',
npm ERR!   path: '/usr/lib/node_modules' }
npm ERR!
npm ERR! The operation was rejected by your operating system.
npm ERR! It is likely you do not have the permissions to access this file as the current user
npm ERR!
npm ERR! If you believe this might be a permissions issue, please double-check the
npm ERR! permissions of the file and its containing directories, or try running
npm ERR! the command again as root/Administrator.

npm ERR! A complete log of this run can be found in:
npm ERR!     /home/channi16/.npm/_logs/2020-05-02T11_56_03_062Z-debug.log
failed to run : npm link bs-platform

So you have to run:

sudo bsb -init hello -theme basic-reason

The problem with this is that it then creates the hello directory and all its contents as root. However, you don’t always need sudo. It seems to be either the first time you use it, or perhaps when you use a new template, but after that you can run bsb -init my-dir -theme basic-reason without sudo. It isn’t even about using a new theme – you can init with themes you haven’t used before. It appears to be just the first time you run bsb -init that requires sudo.

Editor plugins

The advice here is quite simple. Use VS Code – unless, like me, you want your freedom. In that case, if you use Vim, you’re in luck: I’ve already wasted the 10 hours trying all the different combinations of plugins for you. See my previous blog post on Using ReasonML with Vim / Neovim.

Initial coding of the demo project

I started with the demo project, this builds a Demo.re file to a Demo.bs.js file that can be run in node.

React our way up the tree

I wanted to get something displaying in the browser – but the basic demo is node only. I want raw JS.

The react example gives raw JS output – but it has to go through webpack.

Kinda annoying – but I guess the webpack converts the bucklescript node javascript into browser style javascript.

Installing the demo

I ran the following to install the create-react demo:

bsb -theme react -init neural-network-re

Running the demo

This installs a runnable demo. I of course hit an error when running the code:

[ian@localhost neural-network-re]$ npm run webpack

> neural-network-re@0.1.0 webpack /var/www/vhosts/reasonml/neural-network-re
> webpack -w

/var/www/vhosts/reasonml/neural-network-re/node_modules/webpack-cli/bin/config-yargs.js:89
describe: optionsSchema.definitions.output.properties.path.description,
                                           ^

TypeError: Cannot read property 'properties' of undefined
    at module.exports (/var/www/vhosts/reasonml/neural-network-re/node_modules/webpack-cli/bin/config-yargs.js:89:48)
    at /var/www/vhosts/reasonml/neural-network-re/node_modules/webpack-cli/bin/webpack.js:60:27
    at Object.<anonymous> (/var/www/vhosts/reasonml/neural-network-re/node_modules/webpack-cli/bin/webpack.js:515:3)
    at Module._compile (module.js:653:30)
    at Object.Module._extensions..js (module.js:664:10)
    at Module.load (module.js:566:32)
    at tryModuleLoad (module.js:506:12)
    at Function.Module._load (module.js:498:3)
    at Module.require (module.js:597:17)
    at require (internal/module.js:11:18)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! neural-network-re@0.1.0 webpack: `webpack -w`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the neural-network-re@0.1.0 webpack script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /home/ian/.npm/_logs/2018-11-26T22_05_12_616Z-debug.log

It’s so frustrating when things don’t ‘just work’. Anyway, it turns out to be a webpack error that was fixed in the project back in October; the fix is to upgrade the webpack-cli version in the package.json.

Also it looks like I’m running an old version of bsb as it appears to install an old version of the project. My version has the stupid idea of trying to get hot reloading running first, rather than a simple build.

It looks like in the new version they simplify it and just run the regular build. At least then it’s obvious that it’s a webpack error rather than some weird compilation error. It’s now just:

npm install
npm run build
npm run webpack

Then open src/index.html

That is pretty sweet! Got me a lovely simple index.html page up and running.

Hot reloading

Now we can attempt the hot reloading.

npm run start

Then in a separate terminal:

npm run webpack

This then hot reloads the code – but it doesn’t auto refresh the webpage – but at least I understand it.

Steps towards a Neural Network

My reason for doing all this is to convert a simple Neural Network I wrote in JavaScript into ReasonML.

Now the first thing I want to do is get some SVG displaying. This isn’t so simple, because you need to do it in JSX, which gets output to god knows what that eventually produces an SVG.

However, via a forum post, there is a link showing how to display a basic SVG. The slightly cryptic thing is that it doesn’t display anything to the screen – it just creates the SVG in a variable.

I made some modifications to the demo code to output a simple SVG circle, and then got the SVG displaying to the screen too:

let m =
  <svg
    width="200"
    height="200"
    viewBox="0 0 200 200"
    xmlns="http://www.w3.org/2000/svg">
    <circle
      cx="5"
      cy="5"
      r="5"
      style=(ReactDOMRe.Style.make(~fill="black", ()))
      xmlns="http://www.w3.org/2000/svg"
    />
  </svg>;

ReactDOMRe.renderToElementWithId(m, "target");

The online tool (https://reason.surge.sh) is quite interesting because you get a lot more tooltips than I’m used to (this is because it has the language server plugged in). There was also the fun of the style attribute, which was actually relatively painless – the ReasonReact Docs for Style gave me the perfect example:

<div style=(
  ReactDOMRe.Style.make(~color="#444444", ~fontSize="68px", ())
)/>

So now I’ve got an SVG circle appearing in the web version and I can use this in my local version without problem.

Dom dom dom…

Ideally for my basic project I didn’t want to use React. I have a simple HTML page that injects a few <p> tags and some <svg> circles. I do this easily enough with pure JavaScript, so I shouldn’t need React.

However, trying it was very painful. Every DOM element in Reason is an Option, so you spend your life with this ‘could be null’ chap and have to handle the possible null at every stage.

Basically it seems like the DOM is a type nightmare and trying to apply correct types to it produces your own personal hell.

  let _ =
    Document.getElementById("root", document)
    ->Belt.Option.map(_, Element.setInnerText(_, "Hello"));
  ();

  let el =
    document
    |> Document.createElement("p")
    |> Element.asHtmlElement
    |> unwrapUnsafely;
  /* let root = Document.getElementById("root", document); */
  document
  |> Document.getElementById("root")
  |> map(Element.appendChild(el));

The above is the equivalent of:

var root = document.getElementById("root");
root.innerText = "Hello";

var el = document.createElement("p");
document.getElementById("root").appendChild(el);

But trying to create an element, set its innerText and then append it was beyond me after 8 hours of coding.

This is along with using the experimental (but interesting) library bs-webapi, which is also referred to in a useful book on Web Development with ReasonML.

It’s at this point I figured that ReasonReact handles the DOM interaction better. I was able to get a fairly complex svg element displaying, which is the hardest part, so perhaps it’s better to just stick to that.

Hidden pipes

Note that here they use the “reverse-application” operator |>. There’s an excellent description about this in this 2ality blog post on ReasonML functions:

The operator |> is called reverse-application operator or pipe operator. It lets you chain function calls: x |> f is the same as f(x). That may not look like much, but it is quite useful when combining function calls.

In the reason docs they refer to the -> operator instead in the Pipe First section:

-> is a convenient operator that allows you to “flip” your code inside-out. a(b) becomes b->a. It’s a piece of syntax that doesn’t have any runtime cost.

It seems that -> is a subtly different operator from |> as the only reference to |> appears later on that page:

We also cannot use the |> operator here, since the object comes first in the binding. But -> works!

Frustratingly though the reason documentation just includes this mention of |> without actually saying what it is. The operator comes direct from OCaml, see ‘Composition operators’ in the Stdlib documentation.

Note that there is the similarity to the Javascript pipeline operator.

Belt

Also note the use of Belt.Option.map above. The Belt module is a community driven helper module:

A stdlib shipped with BuckleScript. This stdlib is still in beta but we encourage you to try it out and give us feedback. Motivation: The motivation for creating such library is to provide BuckleScript users a better end-to-end user experience, since the original OCaml stdlib was not written with JS in mind.

Easier DOM

Trying the latest bsb -init my-react-app -theme react-hooks includes the following snippet in src/Index.re:

  [@bs.val] external document: Js.t({..}) = "document";
  
  let title = document##createElement("div");
  title##className #= "containerTitle";
  title##innerText #= text;

which generates:

  var title = document.createElement("div");
  title.className = "containerTitle";
  title.innerText = text;

Now this looks a lot simpler. But of course I’ve no idea what the ## or #= operators do.

That’s as far as I’ve got, but at least now we have code that looks like recognisable JavaScript.

Using ReasonML with Vim / Neovim

Here’s my attempts to get ReasonML working within Vim and the journey it took me on to understand what language servers are. If you’re using Vim this is essentially step 2 of the ‘quick start’ guide for reason: editor plugins.

Warning to all who enter here… this took me 10+ hours to fix, because my setup doesn’t mesh exactly with their expected setup. If you want an easy life just use VS Code. Failing that, if you want an easy life with Vim, just use one of the setups listed under ‘So your choices are…’ at the end of this post.

This assumes that you can install the LanguageServer which you have to do for VS Code as well.

Vim support

By default for using linting I have the following setup:

  • Vim 8
  • Vundle
  • ALE (Asynchronous Lint Engine)

This has managed to work ‘OK’ for Javascript with a bit of linting. But I never got fully fledged Language Server Protocol (LSP) integration working, I don’t really know why.

My initial attempts with my default setup were a total failure, mostly because I find the LSP concept hard to understand. ALE should be an LSP ‘thing’: it should be able to act as a language client and talk to language servers.

The repeated issue I come across is that to get LSP working you need the vim-plug Vim plugin manager. I don’t particularly know why, I guess it works better for these more complex plugins.

Looking through my .vimrc file it looks like I tried to get ALE to work as a LSP.

I switched to Neovim as part of this to minimise the number of plugins that I’m installing.

Switching from Vundle to vim-plug

Until now I’ve always used Vundle; it’s a good basic plugin manager. But I keep hitting plugins that only have instructions for vim-plug and not Vundle. vim-plug seems as good as Vundle, so let’s try it and see how much work the conversion is.

It’s actually ridiculously easy – much respect to vim-plug, and probably to Vundle too, for both having very similar and easily replaceable syntax. I replaced the Vundle lines at the start:

-filetype off                  " required
-set rtp+=~/.vim/bundle/Vundle.vim
-call vundle#begin()
-
-Plugin 'VundleVim/Vundle.vim'
+call plug#begin('~/.vim/bundle')

Then I replaced all the Plugin lines with Plug, and then at the end:

-call vundle#end()
-filetype plugin indent on    " required
+call plug#end()

Then I ran :source % and :PlugInstall and it magically installed all my 31 plugins in 10s and they all seem to be magically working. If you use the .vim/bundle installation directory then vim-plug doesn’t even need to install anything.

There’s some useful instructions in the vim-plug wiki on Migrating from Vundle.

Magically also my installation of deoplete worked correctly. So now deoplete pops up all the time as I’m typing – I’m assuming it’s possible to make it less in your face…

if has('nvim')
  Plug 'Shougo/deoplete.nvim', { 'do': ':UpdateRemotePlugins' }
else
  Plug 'Shougo/deoplete.nvim'
  Plug 'roxma/nvim-yarp'
  Plug 'roxma/vim-hug-neovim-rpc'
endif

Because I’ve switched to neovim it simplifies that installation as well.

But I’ve now fallen into the Neovim-only trap: because my installation of deoplete only works with Neovim, I get an error if I try to use Vim now. Perhaps I can just comment that deoplete block out for plain Vim. This works, so now vim-plug just doesn’t use deoplete if I’m in Vim.
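In other words, the block above effectively became this in my vimrc (a sketch of my setup, not the plugin’s recommended config):

" Only load deoplete under Neovim; plain Vim just goes without it.
if has('nvim')
  Plug 'Shougo/deoplete.nvim', { 'do': ':UpdateRemotePlugins' }
endif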

Note that Deoplete should start working immediately after you install it and you should start seeing a popup box as you type.

Installing the language server

This was just downloading the zip from the language server releases page and unzipping it to ~/rls-linux.

Figuring out how to get ALE to work with reason-language-server

I’m using ALE rather than the recommended autozimu/LanguageClient-neovim. ALE is also a Language Client and so should also work fine.

Install via:

Plug 'dense-analysis/ale'

ALE needs to know about the existence of the reason-language-server, and thankfully that support has been added. You can go to :help ale-reasonml-ols and it tells you the correct config:

  let g:ale_reason_ls_executable = '~/rls-linux/reason-language-server'

After this I expected to restart Vim and it to magically work which it didn’t.

The path has to be absolute (as noted in the config information for the LanguageClient-neovim in the README):

  let g:ale_reason_ls_executable = '/home/ian/rls-linux/reason-language-server'

You can also use the magic of Vimscript’s expand() to handle it for you:

  let g:ale_reason_ls_executable = expand('~/rls-linux/reason-language-server')

This then showed promise. Completion would work nicely and you get useful information coming up as you type, this also integrated with ALE via:

call deoplete#custom#option('sources', {
\ '_': ['ale'],
\})

It took a while to see if all the functionality was there. It picked up linting errors, which are put into the location list (:lopen), and it would give useful info with :ALEHover.

I compared it to VS Code, which does manage to implement this better. There the error messages get formatted properly; for some reason the line breaks in the Vim error messages don’t get applied. Also you get the ‘Hover’ (equivalent to :ALEHover) info showing up in context as you type.

I then tried to format the code. ALE has an :ALEFix command that I know works with eslint. It helpfully suggests that you need to configure the correct ‘fixer’ in .vimrc. However, once that has been configured, running :ALEFix does nothing. I installed VS Code to check that using it along with the reason-language-server does correctly format the code – which it does. So there appears to be some problem with ALE.

Time to try another plugin…

So I can either try the Rust made LanguageClient, or the Typescript COC.

I’ve heard about COC a couple of times and it’s designed to be a VS Code matching ‘LSPy thing’. So you should be able to configure Language Clients almost exactly as they do with VS Code but inside Vim.

But hey, I prefer Rust so let’s try that first…

Trying LanguageClient-neovim instead of ALE

Now that I’ve switched to vim-plug installation of LanguageClient-neovim is easy because it includes the manual install step in the .vimrc code.

Then also the configuration was easy because it’s included in the vim-reason-plus README:

let g:LanguageClient_serverCommands = {
    \ 'reason': ['/absolute/path/to/reason-language-server.exe'],
    \ }

Note again here that it mentions ‘absolute path’, so you have to use /home/ian instead of ~ – although I still just use expand('~/path').

Now once I had installed that and restarted NVim then it all worked pretty smoothly.

Firstly the errors show up in the quickfix list :copen as well as to the right of the code. It’s not quite formatted as nicely with the line breaks as VS Code but is at least a consistent block of text, not spaced out with extra padding where the line breaks should be. I suspect though that using the location list is better as it won’t overwrite any searches that I’ve done.

Interestingly, the ‘quickfix list’ is actually supposed to show errors according to :help :copen. So perhaps my searches should be in the location list. This can be changed via:

    let g:LanguageClient_diagnosticsList = 'Location'

The commands for the LanguageClient are more confusing though.

:ALEHover vs call LanguageClient#textDocument_hover()<cr>
:ALEFix vs call LanguageClient#textDocument_formatting()<cr>

You can fix that through vimrc mappings (a sketch follows below), and happily the formatting did work, which is what I want. Shit works without too much hassle.
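The key choices below are mine rather than anything from the README; the functions are the ones mentioned above:

" Hover info on K, formatting on <leader>f, via LanguageClient-neovim.
nnoremap <silent> K :call LanguageClient#textDocument_hover()<CR>
nnoremap <silent> <leader>f :call LanguageClient#textDocument_formatting()<CR>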

Try getting it to work using COC

One nice property of COC is that it doesn’t use any advanced features of vim-plug, so this will probably all still work with Vundle. Also it looks like it combines the Auto-completion and LSP in one plugin.

Install nodejs – you can follow the instructions in https://github.com/neoclide/coc.nvim/wiki/Install-coc.nvim.
Remove Deoplete and LanguageClient-neovim and any settings from your .vimrc file. Then add COC:

Plug 'neoclide/coc.nvim', {'branch': 'release'}

Then reload the vimrc with :source % and run :PlugInstall. Warning you’ll get errors if you haven’t cleared all the deoplete settings. You might still need to restart Vim.

Install the reason extension

COC works similarly to VS Code in that it requires you install extensions to get it to work with certain language servers.

Install the reason extension, which is assuming that you are using the reason-language-server rather than the OCaml or Merlin LSP:

:CocInstall coc-reason

I ran this and it froze my Neovim. Closing and re-opening the terminal got it working again, but not good.

Now you’ll need to configure it, as you always need to specify the absolute path to the reason-language-server. Handily there is a configuration section for Reason.

Run :CocConfig and if it’s empty (because this is the first time of using it), then you first need to insert an empty root object:

{
}

Then put the following config inside the root object:

  "languageserver": {
     "reason": {
      "command": "/absolute/path/to/reason-language-server",
      "filetypes": ["reason"]
     }
  }

Troubleshooting

Don’t forget to put the correct absolute path in the command. Note that here you can’t use the Vimscript expand(), so just use /home/[username].

e.g.: "/home/ian/rls-linux/reason-language-server"

Otherwise you’ll get:

[coc.nvim] Server languageserver.reason failed to start: Command “reason-language-server” of languageserver.reason is not executable
: Error: not found: reason-language-server

As soon as you set the correct path you should immediately start getting auto-completion and LSP goodies in your reason files.

Another possible error you can get is:

[coc.nvim] Server languageserver.reason failed to start: Cannot read property ‘reader’

This is a related error that means you have the command path wrong. In my case I’d just written "/home/rls-linux/reason-language-server" (missing the username).

Further I tested it on a basic reason file created in an empty directory which gives this error:

[coc.nvim] No root directory found

This appears to be a reason-language-server issue #334. You need to initialise the directory as per the installation page.

The auto-completion seems to work very nicely, but I noticed that the error messages are severely truncated, for example:

src/Index.re|7 col 34 error| [undefined] Error: This expression has type [E]
src/Index.re|7 col 59 error| [undefined] Error: The function applied to this argument has type [E]

Actually you can get the full error, using :call CocAction('diagnosticInfo'), or :call CocAction('diagnosticPreview').

This gives a better error than I got with ALE or LanguageClient-neovim:

[reason] [E] Error: This expression has type
(~message: string) =>
ReasonReact.componentSpec(ReasonReact.stateless,
ReasonReact.stateless,
ReasonReact.noRetainedProps,
ReasonReact.noRetainedProps,
ReasonReact.actionless)
but an expression was expected of type
ReasonReact.component(‘a, ‘b, ‘c) =
ReasonReact.componentSpec(‘a, ‘a, ‘b, ‘b, ‘c)

To get code formatting working you need to run :call CocAction('format'). So it’s similar to LanguageClient-neovim in that you’ll probably want to create a whole bunch of vimrc shortcuts. But at least it does format, which is a step up from ALE.

Hover info is through :call CocAction('doHover').
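Again the keys are just my choice; the CocAction calls are the ones above:

" Mirror the LanguageClient mappings for COC: hover on K, format on <leader>f.
nnoremap <silent> K :call CocAction('doHover')<CR>
nnoremap <silent> <leader>f :call CocAction('format')<CR>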

One nice thing about this plugin is that it combines the Complete and LSP plugins and so the Plug config is:

Plug 'neoclide/coc.nvim', {'branch': 'release'}

Instead of:

Plug 'autozimu/LanguageClient-neovim', {
    \ 'branch': 'next',
    \ 'do': 'bash install.sh',
    \ }

" for neovim
if has('nvim')
  Plug 'Shougo/deoplete.nvim', { 'do': ':UpdateRemotePlugins' }
" for vim 8 with python
else
  Plug 'Shougo/deoplete.nvim'
  Plug 'roxma/nvim-yarp'
  Plug 'roxma/vim-hug-neovim-rpc'
  " the path to python3 is obtained through executing `:echo exepath('python3')` in vim
  let g:python3_host_prog = "/absolute/path/to/python3"
endif

So your choices are…

Note these are recommendations, you probably can get Vim/Vundle to work with these, but I’ve simplified my life to match the instructions that maintainers give out.

  1. Neovim, vim-plug, ALE, deoplete: format doesn’t appear to work, error messages are very ugly, deoplete has some nice integrations with fzf
  2. Neovim, vim-plug, LanguageClient-neovim, deoplete: has the most nice touches and seems to work the best with little config, format works, but error messages are still pushed into one long line that can be too long for the location list
  3. Vim/Neovim, vim-plug, COC: simplest plugin setup, but took a while to understand how to configure and doesn’t work so nicely out of the box; format works; appears to give the best formatted error messages – when you go looking for them. It would be easier if they just appeared in the location/quickfix list, but I guess that’s the problem: location/quickfix don’t allow for multiline error messages. Also this seems the most lightweight of the plugins, as it doesn’t bundle all possible language server configurations – you install them as extensions.

Further work with COC

I’ve since had further thoughts with COC. Things that are actually fairly magical.

  1. If you’re a Javascript developer then working with eslint is very common. The coc-eslint plugin magically works with eslint straight off. I had all sorts of problems with eslint and ALE, which required eslint_d and neomake for reasons that I can’t quite remember.
  2. By default the errors in COC don’t show until you switch from insert mode to normal mode. This is actually a better experience in my opinion as it reduces the amount of constant information that you’re getting. There’s no need to display an error just because you haven’t typed something yet.
  3. So it means that I can replace ALE, Neomake, Deoplete and LanguageClient-neovim with COC. COC requires node to be installed, but beyond that there’s no specific vimrc config, so it should allow using Vundle – although it does require a specific branch, which I don’t think Vundle can handle. Also, without Deoplete there’s no difference between using Vim or Neovim, which is great.

A dive into the many faces of manifolds

What are manifolds? I started down this rabbit hole because of Chris Olah’s excellent Neural Networks / Functional programming blog post:

At their crudest, types in computer science are a way of embedding some kind of data in n bits. Similarly, representations in deep learning are a way to embed a data manifold in n dimensions.

So the problem is that people talk about them even when you don’t think you need to care about topology, or even really know what topology means.

The following is a bit of a dive into examples of manifolds with the intention of better understanding what a manifold is.

Starting somewhere

Opening quote from Wikipedia:

In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point.

Without knowing exactly what topological and Euclidean spaces are that’s hard to understand. Let’s try simple Wikipedia:

A manifold is a concept from mathematics. Making a manifold is like making a flat map of a sphere (the Earth).

Ok, kinda. Except this is nicely confused by further reading the Wikipedia page which has:

A ball (sphere plus interior) is a 3-manifold with boundary. Its boundary is a sphere, a 2-manifold.

So, to me, this made no sense: creating a manifold (a flat map) from something that is already a manifold (a sphere is a 2-manifold). Why create a manifold if you’ve already got one? (My thinking here is wrong, but I’m trying to explain all my incorrect thinking as I go and clear it up at the end.)

Bounds checking

Some kind of useful quotes from the section on boundaries:

A piece of paper is a 2-manifold with a 1-manifold boundary (a line)

A ball is a 3-manifold with a 2-manifold boundary (a sphere)

Show me the money

‘Simple’ examples are usually the way for me out of confusion. The list of manifolds helps a lot:

  1. \mathbb{R}^n are all manifolds, so a line \mathbb{R} is a 1-manifold
  2. A x,y 2D plane \mathbb{R}^2 is a 2-manifold
  3. All spheres \mathbb{S}^n are manifolds… stop there, I’m confused

Spheres

This really gets me. Why do we need to create maps of the world if it’s already a manifold? The Simple Wikipedia explanation makes sense, i.e. it’s obvious that we want to create a flat map of the world. It also makes sense that whereas the angles of a triangle on the globe don’t add up to 180 degrees (bad), they do nicely add up to 180 degrees on a flat map (good). But why, if the earth is already a manifold, do we want to create another manifold from it? Isn’t being a manifold good enough? Some people are never satisfied.

n-manifold

A sphere is a 2-manifold, so although it is a 3D object, close to any one point it looks like a 2D grid.

The important part is the n-manifold. This is what cleared it up for me, you don’t care about creating one manifold from another.

An n-manifold resembles the nth dimension near one point. So a flat map is a 2-manifold and the earth is a 2-manifold because they both represent a 2D grid near one point.
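Put another way (my own sketch, not from the Wikipedia pages): the n-sphere sits inside \mathbb{R}^{n+1} but is pinned down by one equation, which is why it only has n ‘free’ directions near any point:

\mathbb{S}^n = \{ x \in \mathbb{R}^{n+1} : \lVert x \rVert = 1 \}, \quad \text{so } \mathbb{S}^2 \subset \mathbb{R}^3 \text{ is a 2-manifold}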

Manifold Hypothesis

In another post, Chris Olah talks explicitly about manifolds. But he talks about them assuming that you know what they are:

The manifold hypothesis is that natural data forms lower-dimensional manifolds in its embedding space.

*scream*

Nash equilibria

Although John Nash is most famous for his Nash equilibrium, a lot of his most important work was to do with manifolds:

His famous work on the existence of smooth isometric embeddings of Riemannian manifolds into Euclidean space.

Differentiation

Interestingly, differentiating a curve at a point gives you the slope of the flat (tangent) line at that point. So there’s a connection between differentiation and manifolds.
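One way I think about that connection (my own sketch, not from the sources above): differentiation gives you the best flat approximation near a point, which is exactly the ‘locally looks like Euclidean space’ idea:

f(x) \approx f(a) + f'(a)(x - a) \quad \text{for } x \text{ close to } a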

There’s a Calculus on Manifolds book that looks interesting, taken from this talk on The Simple Essence of Automatic Differentiation. Then there is a link back to Chris Olah’s NN/FP post based on Conal Elliott’s automatic differentiation paper.

Euclid forgive me

To be honest, even Euclidean space gets me confused. That’s basically just \mathbb{R}^n, which I can understand better. But is there a specific reason for using ‘Euclidean’? Is there some extra property of Euclidean space that isn’t inherent in \mathbb{R}^n? There’s a fundamental principle of parallel lines not converging in Euclidean space, but then if a manifold is in Euclidean space, how can a sphere be a manifold? It still doesn’t explain why we’d want to convert one manifold into another (as in the spheres section above).

Euclid’s grid

I keep thinking of Euclidean space as basically everything. But it’s not – it’s an n-dimensional grid (with infinite points, as it’s the real number line in each direction) and it has straight edges. So a ball isn’t in Euclidean space, or ‘isn’t Euclidean’ – I’m not sure which.

In image analysis applications, one can consider images as functions
on the Euclidean space (plane), sampled on a grid

Geometric deep learning paper

Data manifolds (back to Chris)

Perhaps data manifolds are the structure of the data. For example as above an image data is a 2D Euclidean grid.

They get referred to in the Stanford CS231n CNNs course when talking about ReLUs:

(-) Unfortunately, ReLU units can be fragile during training and can “die”. For example, a large gradient flowing through a ReLU neuron could cause the weights to update in such a way that the neuron will never activate on any datapoint again. If this happens, then the gradient flowing through the unit will forever be zero from that point on. That is, the ReLU units can irreversibly die during training since they can get knocked off the data manifold. For example, you may find that as much as 40% of your network can be “dead” (i.e. neurons that never activate across the entire training dataset) if the learning rate is set too high. With a proper setting of the learning rate this is less frequently an issue.

It’s as if data manifold is the useful/accessible part of the data.

What’s not Euclidean

For instance, in social networks,
the characteristics of users can be modeled as signals on the
vertices of the social graph [22]. Sensor networks are graph
models of distributed interconnected sensors, whose readings
are modelled as time-dependent signals on the vertices.

In computer graphics and vision, 3D objects are
modeled as Riemannian manifolds (surfaces) endowed with
properties such as color texture.

Geometric deep learning paper

Manifolds don’t have to be Euclidean! But at any particular point on a manifold’s surface it is approximately a grid in the local area. This also holds for things that are already Euclidean: locally, a sheet of paper looks like a grid because the sheet of paper is Euclidean to start with.

But certainly I like the explanation that 3D graphics are manifolds. So there’s nothing special that makes the earth a manifold – it’s just one example of one. So I think this means that any 3D shape is a manifold.

So what’s not Euclidean and not a manifold?

What’s not a manifold?

That dear reader… is an exercise for you.

What did this get us?

  1. Euclidean space means a grid (duh)
  2. An image of pixels is a 2D grid – possibly the data manifold that Chris Olah was referring to
  3. A 3D graphic (or the earth) is a 2-manifold. These are not Euclidean but are 2D Euclidean (grid shaped) close to a point on their surface
  4. When your data is 3D graphics – that is your data manifold

Vim Airline Powerline fonts on Fedora, Ubuntu and Windows

N.B. I’ve also answered this on Vi Stack Exchange, but I’m posting it here as it took a lot of work.

This took hours to figure out, so here’s more of a dummies guide for Fedora/Ubuntu, with a special section for Windows.

The first thing is figuring out what the hell those strange but nice angle brackets that appear in the vim-airline status bar are. The background is that airline is a pure Vim version of powerline (which was Python), and powerline uses UTF-8 characters to draw those angle brackets. So vim-airline just uses the same UTF-8 characters.

Then even if you do manage to get one installed they look uglier than you’d hope because the fonts don’t fully work.

Configuring Vim

This is the opposite order to the official instructions, but I had this bit wrong at the end, which made me question all the font installations. So I suggest you get this configured first; then, once you get the fonts working, it should magically appear.

The final trick was forcing vim-airline to use the fonts it needs. According to the official documentation it should just be a matter of adding let g:airline_powerline_fonts = 1 to your .vimrc. However I did this and had no luck. There’s more information in :help airline-customization, and that gives you some simple config settings that you need, just in case. This was the final magic sauce that I needed. I don’t know why this wasn’t automatically created. This is also mentioned in this Vi Stack Exchange answer.

    if !exists('g:airline_symbols')
        let g:airline_symbols = {}
    endif

    " unicode symbols
    let g:airline_left_sep = '»'
    let g:airline_left_sep = '▶'
    let g:airline_right_sep = '«'
    let g:airline_right_sep = '◀'
    let g:airline_symbols.crypt = '🔒'
    let g:airline_symbols.linenr = '☰'
    let g:airline_symbols.linenr = '␊'
    let g:airline_symbols.linenr = '␤'
    let g:airline_symbols.linenr = '¶'
    let g:airline_symbols.maxlinenr = ''
    let g:airline_symbols.maxlinenr = '㏑'
    let g:airline_symbols.branch = '⎇'
    let g:airline_symbols.paste = 'ρ'
    let g:airline_symbols.paste = 'Þ'
    let g:airline_symbols.paste = '∥'
    let g:airline_symbols.spell = 'Ꞩ'
    let g:airline_symbols.notexists = 'Ɇ'
    let g:airline_symbols.whitespace = 'Ξ'

    " powerline symbols
    let g:airline_left_sep = ''
    let g:airline_left_alt_sep = ''
    let g:airline_right_sep = ''
    let g:airline_right_alt_sep = ''
    let g:airline_symbols.branch = ''
    let g:airline_symbols.readonly = ''
    let g:airline_symbols.linenr = '☰'
    let g:airline_symbols.maxlinenr = ''

Kitchen sinking it on Fedora and Ubuntu

This is probably an overkill solution, but first you need to get it consistently working before you can simplify it.

  1. Install the general powerline font sudo dnf install powerline-fonts (or sudo apt install fonts-powerline) – this should mean that you can use any font you already have installed. If you don’t have an easy way of installing like dnf/apt then there’s instructions for manually doing it e.g. https://www.tecmint.com/powerline-adds-powerful-statuslines-and-prompts-to-vim-and-bash/, also the official documentation has instructions (https://powerline.readthedocs.io/en/latest/installation/linux.html#fonts-installation).

    Now close your terminal re-open and check that the Powerline symbols font is available if you edit the terminal preferences and set a custom font. You don’t want to use the font directly, just check that it’s available. Now try opening Vim and see if you have nice symbols.

  2. If the general powerline font didn’t work or if you’re trying to improve things you can try installing individual ‘patched’ fonts, this took a while to figure out, but you can literally just go to the folder you want in https://github.com/powerline/fonts/ and download it, the font that I’ve liked the most from my tests is the Source Code Pro patched font. Then just open the downloaded font file and click on ‘Install’.

    If you’d rather the command line, you can install all patched fonts:

    $ git clone https://github.com/powerline/fonts.git --depth=1
    $ fonts/install.sh
    $ rm -rf fonts
    

    This will install all the patched mono fonts, but then this gives you a chance to explore the possible fonts. The font list it installs is a pretty awesome list of the available source code fonts. It also means you don’t have to faff around installing each of the individual fonts that get included.

  3. Check that the font can be specified in the terminal preferences, re-open your terminal session if you’re missing fonts, so note there could be two options here:
    1. The general powerline font is working in which case you can just use the base font e.g. DejaVu Sans Mono
    2. If you can’t get that working the patched font that you downloaded above should be correct e.g. the equivalent for DejaVu is ‘DejaVu Sans Mono for Powerline’.

Handling the delicate flower of Windows

The Powerline Fonts package doesn’t work on Windows, so your only choice is to use a patched font. The bash script that installs all the fonts doesn’t work either. This means that on Windows you have to manually go into each of the font directories, download all the fonts yourself and install them by opening each one in turn.

I downloaded all of the Source Code Pro patched fonts and installed them. Even though you install them as individual fonts they get added to Windows as a single font ‘Source Code Pro for Powerline’ with a separate attribute to specify the weight.

Then add this to your .vimrc:

set guifont=Source\ Code\ Pro\ for\ Powerline:h15:cANSI

If you want to use the ‘Light’ font use this.

set guifont=Source_Code_Pro_Light:h15:cANSI

It doesn’t make much sense that it doesn’t need to include the ‘for Powerline’ part, but that’s how it works (I figured it out by setting the font in GVim and then using set guifont? to check what GVim used). Also I spotted that when you use GVim to switch the font, the font rendering isn’t very good. I initially discounted the Light font because when I switched using the GVim menu it rendered badly, but if you put the setting above into your .vimrc and restart GVim it should look lovely.

Also the nice thing is that you can set your DOS/Powershell prompt to the same font.

Tweaking

Once I actually got it working for the first time, it was really disappointing as the icons didn’t fully match up. But as per the FAQ we need to do some tweaking. I started off with Inconsolata as this gives me a consistent font across Windows and Linux. You can install the general font easily on Ubuntu with apt install fonts-inconsolata. This is what I got:

[screenshot: vim-airline with Inconsolata]

The arrows are too large and are shifted up in an ugly manner.

Then I tried all the other default Ubuntu fonts.

Ubuntu mono:

[screenshot: vim-airline with Ubuntu Mono]

DejaVu Sans Mono:

[screenshot: vim-airline with DejaVu Sans Mono]

This has the vertical position correct but the right hand side arrows have a space after them.

Why use the patched fonts

Using the default fonts relies on the Powerline font to automatically patch existing fonts. However you can improve the look of the airline symbols by using the patched fonts. These are the equivalents using the patched fonts.

I display these all at font size 16 as I like to use a larger font, plus it shows up minor issues.

Inconsolata for Powerline:

[screenshot: vim-airline with Inconsolata for Powerline]

This still has issues, but they are almost all solved by the dz variation.

Inconsolata-dz for Powerline:

[screenshot: vim-airline with Inconsolata-dz for Powerline]

This has a hairline fracture on the right hand side arrows, but is otherwise perfect.

Ubuntu Mono derivative Powerline Regular:

[screenshot: vim-airline with Ubuntu Mono derivative Powerline Regular]

This still has annoying issues.

DejaVu Sans Mono for Powerline Book:

[screenshot: vim-airline with DejaVu Sans Mono for Powerline Book]

This has a hairline fracture on the right hand side arrows, but is otherwise perfect. I actually prefer it to the Inconsolata-dz as the LN icon is more readable.

On top of these regulars, I tried almost all the available fonts and my other favourite was Source Code Pro.

Source Code Pro for Powerline Medium

[screenshot: vim-airline with Source Code Pro for Powerline Medium]

This does have issues at size 16 where the arrows are too big, but at size 14 it’s almost unnoticeable. The branch and LN icons do overflow to the bottom, but somehow this doesn’t annoy me.

Source Code Pro for Powerline Light

[screenshot: vim-airline with Source Code Pro for Powerline Light]

This almost completely solves the issues of the medium font’s arrow sizes and makes it about perfect, although there’s still the icon overflow.

Source Code Pro

When I was investigating the font options there are a couple of things you notice. Some font patches have the absolute minimum of detail; if you compare this to the Source Code Pro list the difference is quite significant. Source Code Pro is a very detailed and complete font that has been designed to work in a large range of scenarios. This kind of completeness matters for edge cases.

Used as a patched font it almost perfectly displays the vim-airline bar. The benefit of so many alternatives is the use of the light font which has an even better display of the vim-airline bar.

Source Code Pro is also under continued open development on Adobe’s Github repository.

Self-driving cars should first replace amateur instead of professional drivers

Photo credit: cheers Prasad Kholkute

Professional drivers, i.e. lorry, bus and taxi drivers, are under threat of being replaced by computers, whilst amateur drivers – all the rest of us – feel no pressure to stop.

I want to lay out the reasons why this is backwards. Amateur drivers should be replaced, whilst professional drivers should have their skills augmented.

Once we reach a level of self-driving automation where no humans are needed, this piece will no longer be relevant, but there is a lot of scope to avoid human deaths before we reach that point.

The trolley problem is not the issue; the drunken humans who think it’s fun to go racing the trolleys are the problem.

This gets long, so feel free to skip ahead to a section. It is based on a European-centric viewpoint, most specifically UK and Belgian roads.

  1. Professional vs amateur
  2. Level 3 automation
  3. Safety
  4. Drinking, texting, calling, dozing, tailgating
  5. Crash vs accident
  6. Bus crashes
  7. Solve all bus crashes
  8. Business cost from crashes on the motorway
  9. Lost time to driving
  10. The cost of the dead
  11. Advanced driving licence
  12. Driving freedom
  13. Difficulties
  14. Simulations
  15. Conclusion

Professional vs amateur

Lorry, taxi and bus drivers are professional drivers. They do it for their living. People pay them to drive.

Lorry drivers are paid because they can reliably and safely deliver large amounts of goods over long distances. Billions of euros, every year.

Bus drivers are responsible for transporting 10–50 people on each journey. When you consider the value of the cargo, either in human terms or in absolute money ($5m per person), the responsibility is huge.

Taxi drivers are effectively looking after multi-million dollar cargo. They also have vast local knowledge. A special case of this is the ‘knowledge’ that London taxi drivers must pass. Much of this local knowledge is rendered less important by GPS systems, but still the GPS augments the driver’s knowledge.

These professional drivers drive all day and in all conditions. They have well-maintained vehicles, and make full use of them rather than leaving them sitting in garages doing nothing.

Professional drivers are the gold standard of driving styles.

Everyone who is not paid to drive full time is simply an amateur. They typically drive for the economic benefit (time saving) or the convenience and freedom of having a car.

They typically have people in the car who are personally more valuable to them than taxi passengers, as they are likely family or close friends. The contrast is that they do not think in these terms. They put a ‘baby on board’ sticker in the back window and assume that will solve the problem.

Level 3 automation

The current level of self-driving for cars such as Tesla and GM is level 3. This is where the car can drive itself but requires constant supervision. This flies in the face of two facts:

  1. Humans are good at vision but bad at concentration
  2. Computers are bad at vision but good at concentration

With passive level 2 systems in a car, the human uses their vision and the car concentrates, stepping in for extreme circumstances. This also helps the human’s concentration, because driving keeps them constantly engaged.

With active level 3 systems, the computer is relied on for its bad vision and the human is no longer constantly engaged but expected to keep their bad concentration.

Further, there is a major issue of doubt and delayed reactions. With level 2 systems, if the car detects it is about to crash, it does not doubt the outcome and reacts faster than a human could. With level 3 systems, if the human detects that the car is about to crash, they are in doubt as to whether the computer will avoid it, and so delay before doing anything.

Safety

Buses in Belgium are 80% safer than cars (0.4 deaths vs 2 deaths per million passenger miles in 2015). But the figure for cars looks better than it is because it includes taxis. Taxis do many more miles than amateur drivers and are less likely to crash.

In fact the number of bus crashes is so low that we can almost look at the individual cases. Also note that a single bus crash can kill up to 30 people, so the number of deadly accidents is even lower.

Lorry, bus and taxi drivers drive more safely. They avoid all the typical causes of crashes:

  • Alcohol
  • Tiredness
  • Texting
  • Calling
  • Speeding
  • Going through red lights
  • Driving erratically
  • Having poor eyesight
  • Driving too close
  • Not changing driving style to bad weather conditions
  • Panicking in an accident
  • Driving without insurance
  • Driving without a licence

Tiredness can be an issue for lorry drivers, but there are strict rules about it. Automated cars also avoid these failings, but they don’t have human-level performance to handle all situations.

This level of passenger safety is the potential route to ending traffic deaths. Once you can examine individual crashes, as is done with air crashes, the cause of each crash can be properly understood and recorded. When crashes are as numerous as they are now, only aggregate statistics can be used, and those will never get the number of deaths down to zero.

Professional drivers can be augmented by automated driving. Each crash can be analysed and added to the training for the automated car. This means that automated cars will have more knowledge of extreme cases but less of everyday driving. On top of that, automated cars can react quicker and without emotion to handle a car that is out of control. Automated systems can be taught to drive at the very limit of the frictional ability of the tyres in the prevailing weather conditions.

Drinking, texting, calling, dozing, tailgating

These are the fundamental problems of amateur drivers. Whilst they are warned about the consequences, the chance of having an accident is always very low, so there is always the temptation to drive when incapacitated in some way. There is no standard test that can be enforced on drivers before starting a journey. There are some efforts to put breathalysers in cars, but the chances are so low that these changes will not get deaths to zero. The only stories I have heard about this are repeat drink-drive offenders and taxi drivers. And this still only stops one aspect, drinking. All the other failures of human drivers have no acceptable solutions.

Crash vs accident

When trying to weigh the cost of human life against the cost of professional drivers, the language needs to change to reflect that practically all crashes are avoidable; they are not accidents.

As this CityLab article argues when talking about road deaths, framing them in terms of crashes rather than accidents puts the focus on the causes. Each crash should be treated like an air crash. There are no air accidents, and to get down to zero road deaths there can be no road accidents either. An accident isn’t necessarily avoidable, but a crash is.

Bus crashes

Bus crashes in Belgium are down to single-figure deaths per year. When the figures are this low we can consider each case individually. These are some of the causes:

  • The driver becomes unwell, for example has a heart attack
  • A tyre blows out on a motorway crossover and the bus drives off the side

Both these cases would be better handled with automated assistance.

If the driver takes their hands off the wheel the computer can take over and bring the bus to a stop at the side of the road.

In an extreme event such as a tyre blow-out or hitting a large patch of ice, the computer can be trained to do better. Simulations can be replayed millions of times and the raw physics of these situations can be thoroughly analysed.

Solve all bus crashes

Each of the situations that caused a fatal bus crash can be analysed and simulated. Then further with simulations the environment can be altered to train the car on other similar situations.

Aircraft pilots train in a similar way. These are the best possible circumstances for training drivers. There is such a long history of bus crashes, and the numbers are already so low, that there is a realistic chance of preventing all bus deaths across Europe.

But all of this is humans being augmented by computers. The computers are treated as the safety belt, there to catch exceptional circumstances. All the while they can be learning from the bus drivers. Especially if the bus drivers have taken their advanced driving test, the computers are learning the most consistent and safe method of driving.

Business cost from crashes on the motorway

One of the potential but more extreme solutions is to ban amateur drivers from the motorway. It could be weakened to allow amateur drivers with an advanced licence onto the motorway.

One simple argument for this is the business cost of the delays that crashes cause. One crash delays thousands of people and lorries. The ring around Antwerp is a major example.

Reducing road deaths on the motorway to zero is within reach. Reducing the bad drivers on the motorway has a multiplying effect. It takes two bad drivers to crash. One who makes the mistake and the other that is too close.

There are some major issues with this solution:

  • It will force bad drivers off the motorway and onto the normal roads. This will increase the death rate off the motorway
  • Policing this will be a problem. A simple possibility is to have a letter in the car windscreen for all those with an advanced licence. Effectively a reverse learner sticker

Lost time to driving

Professional drivers lose no time when driving a car, bus or lorry, because driving is their work. They only lose the time driving to their work; effectively during that commute they are amateurs too.

But all amateur drivers are losing time. Perhaps it is more pleasurable than being stuck on a train. But it’s wasted time.

The first area where level 4 (fully autonomous) vehicles will become a reality is motorways. It would be possible to allow self-driving whilst on a motorway but switch it off once the GPS detects that the car has left it. This would cut down on motorway deaths, save businesses money from fewer delays and give commuters more free time.

The cost of the dead

Those who die in car crashes are an especially tragic kind of death. They almost always have nothing wrong with them; they are mentally and physically healthy. They are also often young, and it is especially cruel when young children, for example pedestrians, are killed.

The economic cost is put at $5m per person, based on the amount of output that an average person can produce. But the wasted effort is bigger when the person is killed younger: all the training and education has been given, but they never get a chance to pay it back into society.

But that does not take into account the destroyed lives of the victims’ families: parents who lose their children, siblings who lose their brothers and sisters, children who lose their parents. Nor the economic cost of the victims’ families who struggle to work for years after the death.

Why do people not feel/see this pain? We stick horror photos of smokers on packets, but nothing of the crash victims on cars.

The biggest insult is that of drink drivers. The thought of having your child killed by a drunk driver. Death by human stupidity. Not someone evil just someone stupid.

No other amateurs can do such damage. Professionals practically never do this. There will be cases, but the cases are so few that it is at a level at which no more can be done.

Advanced driving licence

One potential improvement is to increase the requirements of the driving tests. In the UK there is the advanced driving test, which tests candidates in many extreme situations as well as increased road safety and traffic awareness. The insurance premium for advanced drivers is lower.

A further possibility would be to require a new driving test every 10 years, at the level of the advanced test.

These tests should also be mandatory for professional drivers, but the hope is that it will be easier for them to pass and they will get the added benefit of lower insurance premiums when driving for themselves.

This will smooth the transition to automated cars as humans will behave at a level that is closer to professional drivers.

Driving freedom

The major issue with restricting amateur drivers is the freedom and reliance we have on cars.

If we were to restrict who can go on motorways it would restrict poorer people unfairly. They cannot afford to buy expensive automated cars.

Is there research on how social status correlates with driving deaths?

But we are not completely restricting driving, just motorway driving for those without an advanced licence. So the freedom is still there, just a bit slower.

Plus retaking your licence every 10 years.

But it is restricting a freedom that exists now. People are happy to accept the deaths for the freedom.

But commuters don’t need the freedom. Take it away from the rich with their cheaply taxed company cars. Commuters could take taxis and buses. Introduce toll roads; the French péage system is perfect. Hike the prices during commuting times but make exceptions for professional drivers.

Difficulties

There was a bus lane in London that received a lot of complaints and was eventually scrapped because it became too much of a political issue. In Belgium, bus lanes still exist in the slow lane. A bus lane could instead be a professional lane, which would include lorries, meaning the lane would be fully used.

It would be very unpopular though.

The ultimate is to prevent amateur drivers from using the motorway. But you can’t prevent foreign drivers; you could restrict them to lorry speeds unless they have a pass. It is the car pool lane, but now as the professional or advanced drivers’ lane.

Bring in an advanced test and also a 10-yearly driving test, and you can force up the level of driving. But then the whole population must do a driving test, and the system can’t cope with that now; there aren’t enough driving instructors.

Simulations

The automated part of the test could be expanded, along with sight and danger-awareness tests.

Also a focus on timing: keeping the 3-second rule and increasing it in rain and ice.

We have a shortage of driving instructors, but driving simulations can be drastically improved. There are very realistic driving simulation games with highly accurate physics. These games are played by children, but with no emphasis on them being a useful tool.

This is how pilots are trained, they do thousands of hours in a simulator, repeatedly learning disaster scenarios.

All drivers could be put through multiple simulations of crash scenarios, or of taking on a skid pan as would happen in a real advanced driving test. They could drive around virtual cones until they master it. Mastery of the situation should be the key.

Currently it is very costly to take a driving theory test. The tests could be made harder but given a fixed price that allows as many retakes as required. This is similar to the concept put forward for learning with Khan Academy: mastery is what matters, getting 100% on the test whilst allowing as many retakes as required.

A further benefit is that the data collected by the driving simulations can be used to train the AIs: to see how a human handles a crash, and to gain insight from the multiple attempts at the same crash into which methods can be used to avoid it.

Conclusion

The focus should be on improving and replacing amateur drivers whilst augmenting professional drivers. This is a highly unrealistic hope as the focus for self-driving car companies now is simply to cut the human cost of drivers. But the human cost of death should be regarded as a higher priority, with a requirement for public policy to intervene.

On the power of netbooks and laptops over Android/iOS

iOS and Android turn tablets into oversized phones, so no surprise they lose against phones – they have the same (or usually worse, at a given price point) capabilities while being larger, thus less convenient to carry and more fragile.

TeMPOraL on HN

Windows did try the same as Android and iOS with Windows RT; thankfully that was a disaster. Certainly one of the bad points about iOS and Android is that they are so locked down that you have to jump through hoops if you want to use them as a work machine. You have almost no access to the file system, and you have to pay iPad Pro levels of money to get the novelty of having windows side by side.

Netbooks are ridiculously useful. I used to have a 15-minute bus ride to work with a 12″ Asus Eee and would manage to fill that 15 minutes with active development time every day. The work I did on that bus became the frontend of what is now, 10 years later, a $50m company. On the other end of the scale, I spent weeks with my then 7-year-old nephew creating stop-motion animations using the same netbook.

For my current job I bought myself a $350 refurbished Thinkpad (T430, 8GB RAM, SSD, Core i5), and this brings in all my income. Compare that to people who pay $1000 for an iPhone X because they got bored of their iPhone 8.

The possible drawback is that a Thinkpad doesn’t have a touchscreen. But from my experiment of buying a laptop with a touchscreen, I found I pretty much never wanted to use it; it’s a slower interface than keyboard and mouse. You want the screen in front of you at arm’s length, but then you have to reach out with your arm to touch it.

I bought exactly the same spec Thinkpad for my 5-year-old daughter. The Thinkpad T-series are great because you can pour a litre of liquid over them without problem [0], plus they’re built like a brick, so basically perfect for kids. My daughter immediately covered the grey brick with shiny stickers and gave it a name, ‘Fiona’. In theory Fiona has the full capability to do everything my daughter will ever need for the rest of her school years; I don’t imagine a massive shift away from laptops in schools for the next 15 years. Further to that, Fiona has Ubuntu installed and I can then install Sugar [1] on top (the same software used for One Laptop Per Child [4]).

I can now teach her over the years what it means to have real freedom with your software and hardware.

P.S. I posted an original version of this on HN [3]

Yet another monad explanation

Photo credit: cheers @laimagendelmundo

tl;dr Here’s a really short explanation for JavaScript, as in just the flatmap part.

map is pretty well understood in JavaScript (and I’m assuming you understand it).

So you ‘just’ need to make the leap to flatmap. Which is mapping something and flattening the result.

Flattening a JavaScript array means concatenating a 2D array into a flat 1D array.

Longer Python example

Another attempt at explaining monads, using just Python lists and the map function. I fully accept this isn’t a full explanation, but I hope it gets at the core concepts.

I got the basis of this from a funfunfunction video on Monads and the Learn You A Haskell chapter ‘For a Few Monads More’. I highly recommend watching the funfunfunction video.

At its very simplest, monads are objects that have map and flatMap functions (bind in Haskell). There are some extra required properties, but these are the core ones.

flatMap ‘flattens’ the output of map; for lists this just concatenates the values of the list, e.g.

concat([[1], [4], [9]]) = [1, 4, 9]

So in Python we can very basically implement a list Monad with just these two functions:

# helper function as python doesn't have concat
def concat(lst):
    return sum(lst, [])

# monad magic
def flatMap(func, lst):
    return concat(map(func, lst))

func is any function that takes a value and returns a list e.g.

lambda x: [x*x]

Explanation

For clarity I implemented the concat helper in Python as a simple function that sums the lists, i.e. [] + [1] + [4] + [9] = [1, 4, 9] (Haskell has a native concat function).

I’m assuming you know what the map function is e.g.:

>>> list(map(lambda x: [x*x], [1,2,3]))
[[1], [4], [9]]

Flattening is the key concept of monads: for each object which is a monad, this flattening is what lets you get at the value wrapped inside the monad.

Now we can call:

>>> flatMap(lambda x: [x*x], [1,2,3])
[1, 4, 9]

This lambda is taking a value x and putting it into a list. A monad works with any function that goes from a value to a type of the monad, so a list in this case.

That’s your list monad defined.

You can now compare this to a python list comprehension:

>>> [x*x for x in [1,2,3]]
[1, 4, 9]
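
As an aside of my own (not part of the original comparison): the closer comprehension analogue of flatMap is the double-for form, which does the flattening implicitly:

>>> f = lambda x: [x*x]
>>> [y for x in [1, 2, 3] for y in f(x)]
[1, 4, 9]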

More explanation, time to bring out the Haskell

Other examples that aren’t lists are JavaScript Promises, which have the then method, and JavaScript Streams, which have a flatMap method.

So Promises and Streams use a slightly different function which flattens out a Stream or a Promise and returns the value from within.
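
To make that concrete without leaving Python, here is a minimal sketch of my own (Box is purely illustrative, not from the video or the book) of a single-value container with the same two operations:

class Box:
    # a trivial container wrapping one value
    def __init__(self, value):
        self.value = value

    def map(self, func):
        # func returns a plain value, so wrap the result again
        return Box(func(self.value))

    def flatMap(self, func):
        # func already returns a Box, so don't wrap it a second time
        return func(self.value)

>>> Box(3).flatMap(lambda x: Box(x * x)).value
9

A Promise’s then behaves like flatMap here: the callback returns another Promise, and then hands you a single flattened Promise rather than a Promise of a Promise.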

The Haskell list monad has the following definition:

instance Monad [] where  
    return x = [x]  
    xs >>= f = concat (map f xs)  
    fail _ = [] 

i.e. there are three functions: return (not to be confused with return in most other languages), >>= (the flatMap) and fail.

Hopefully you can see the similarity between:

xs >>= f = concat (map f xs)

and:

def flatMap(f, xs):
    return concat(map(f, xs))


Password randomness and the UX of passwords

I’ve been having a look at passwords again as the WooCommerce/WordPress password strength meter has been causing problems.

The password meter actually likes the method popularised by XKCD. Assuming the words really are random, the maths of that method seems to have been checked and re-checked, and it is based on a lower-bound assumption (the worst-case scenario that the attacker knows you are using the method), so it is still a very good method.

i.e. ‘correct horse battery staple’ (550 years to crack) vs ‘Tr0ub4dor&3’ (3 days to crack).
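
As a back-of-the-envelope check of those numbers (my own calculation, using the comic’s stated assumptions of four words drawn from a roughly 2048-word list and an attacker making 1000 guesses per second):

# four random words from a 2048-word list, 1000 guesses per second
list_size, words, guesses_per_second = 2048, 4, 1000
combinations = list_size ** words                 # 2**44, about 1.8e13
years = combinations / guesses_per_second / (60 * 60 * 24 * 365)
print(round(years))                               # ~558, in line with the comic's "550 years"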

It’s just the random bit in the XKCD definition which needs to be repeated to people again and again.

Also don’t forget the spaces, as even Bruce Schneier and an Ars article on password cracking ignore them. You can use dashes or underscores instead, as some places (I’m looking at you, Microsoft) refuse spaces. They’re handy extra bits of entropy for no extra (human) memory, and each extra bit of entropy exponentially increases the cracking time.

Randomness

One of the attacks mentioned in the Ars article specifically targets the XKCD method, where two random long strings from two dictionaries are put together.

“Steube was able to crack “momof3g8kids” because he had “momof3g” in his 111 million dict and “8kids” in a smaller dict.”

The problems you hit are if someone else has used the same four words and their password gets hacked, or if the two halves of the password you select are commonly used.

The problem is that people pick their own words rather than generating random ones, and humans are more likely to pick words that other humans pick.

So what can be done for people to select random easily remembered words?

The simplest way is to offer a suggestion of randomly chosen words as their password, using for example passphra.se. I’ve had a look at its source code and the randomness of the selection seems to be pretty comprehensive, but I’m not a security expert.
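
A sketch of how such a suggestion could be generated in Python (this is not passphra.se’s actual code, and wordlist.txt is a placeholder for whatever dictionary of common words is used):

import secrets

# one common, easy-to-remember word per line
with open("wordlist.txt") as f:
    words = [line.strip() for line in f if line.strip()]

# secrets draws from the OS's cryptographically secure randomness,
# unlike the random module, which is not meant for passwords
suggestion = " ".join(secrets.choice(words) for _ in range(4))
print(suggestion)  # e.g. "correct horse battery staple"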

However, what I like about passphra.se is that you can just use the suggestion as a ‘seed’ for your password. Then you can tailor it slightly towards words that are more relevant to you.

How important is randomness?

The point of the XKCD method, as I see it, is to raise the bar of the passwords that the weakest users choose.

We’re not talking about the passwords that security experts should use, we’re talking about regular people who don’t care about security.

I think even the inbuilt Firefox/Chrome password manager locked with an XKCD-style password is great for normal users, based on this Super User answer. Even if they don’t have a password locking the password manager, it’s still better that they’re using more secure passwords; it moves the point of weakness to their password manager, which requires a much more personal attack.

Possible unproven minor improvements

These are ideas to try to work with the kind of passwords that the weakest users will pick. As per the XKCD comic, we want to produce passwords that are hard for computers to guess but easy for people to remember.

Here I’m assuming that someone doesn’t want to choose a properly random set of words. Are there words that people can think of that will be inherently more random?

I think that local slang is a good way of choosing words. Every community has its own words, often unwritten, so with no common spelling. Anyone who’s read an Irvine Welsh novel (Trainspotting) will know some of the glorious Scottish slang that he writes. This means your source material gets more obscure, so it’s less and less likely to be in a dictionary somewhere.

But obviously those examples are still written down and can be included into dictionaries.

What about the rather silly porn star names? e.g. first pet + street you grew up on / middle name.

You need words that are definitely obscure, but relevant to you.

Changing your password

Another thing I like about the XKCD method is that for those who are forced to change their work password every 90 days, you can change one of the middle words (to another randomly chosen one). This only slightly changes what you have to remember, but it avoids the trick that password crackers use of cutting off the last 4 characters and trying all possible random sequences.

Keep it simple

I’ve also seen people suggesting that you should combine upper case, lower case and symbols with the XKCD passwords. But from what I understand that misses the point. Security-minded developers keep wanting to make the words more complex, but that always makes them harder to remember. The point of lower case with spaces is that it looks completely natural and there is nothing else to remember. You just hold the image of the four random words in your head; you don’t have to remember the four words and then try to recall what kind of substitution you applied to them. XKCD picks up on this in the hover text of the cartoon:

To anyone who understands information theory and security and is in an infuriating argument with someone who does not (possibly involving mixed case), I sincerely apologize.