Translating Sublime Text into Vim

Intro

Coming from a Windows world, getting into Vim was for me almost exactly like the struggle I had learning French or Dutch. I spent 10 years learning French growing up and I can’t speak a proper sentence. I then moved from England to the Dutch-speaking part of Belgium (Flanders) and learnt to speak Dutch to a conversational level within 2 years.

If you’re going to learn Vim you need to immerse yourself in it. I suspect the majority of Vim users only ever use it to make minor file modifications via SSH. That’s what I did anyway.

I’ve used lots of editors in Windows but the one I prefer now is Sublime Text (ST). However, ST has almost exactly the same commands as other editors, with one major improvement: Ctrl+P, which we’ll come to later. ST is free to use with a popup once in a while; it’s a great tool and you should buy a licence.

So for users of all other editors, all you have to do is learn the elements of Sublime Text I use here and then you should be able to translate them to your own editor. I hear you, Notepad lovers. So we’ll use ST as the boundary layer between our nice fuzzy Ctrl + N, Ctrl + C, Ctrl + V and Vim’s :e, y and P.

Why O Why

Is it worth the pain? I have spent in excess of 100 hours doing nothing but learning Vim and getting it set up the way I want.

I mean, the question ‘How to close Vim?’ has 1,000+ upvotes on Stack Overflow. That’s insanity.

However, I think that if you master Vim, the layer between your thought and your code becomes thinner. So this has nothing to do with linting or plugins; this has to do with performing at a higher level with the code that you write.

Also I think Vim is just misunderstood. This is where my analogy of learning a spoken language comes from. Switching between Windows editors is like switching dialects: sure, some of the Scottish folk sound funny, but you can understand what they say. All of us assume that Vim is just another dialect, but it’s not. It’s like nothing you’ve used before. So there’s nothing for your brain to grab on to and understand.

  1. Vim gets you closer to your code. Once performant in Vim you can perform code editing tasks faster and keep up with the speed of your thought. This breaks through the ceiling that you will hit with most other GUI editors.
  2. Vim is very fast. I don’t think it’s necessarily faster than ST, but it’s certainly at that level: everything happens instantly. There’s none of the delay that you sometimes have when opening ST or other heavier editors, e.g. NetBeans or IntelliJ. Speed is one of the barriers between your thought and your code; slow editors are slowing you down.
  3. Vim is ‘hyper’ cross-platform (Windows, Mac, Linux, SSH, Docker, Browser (via WASM), Android, Amiga …) and works via the command line, so everything you learn on Windows carries over and you can use the exact same commands on Linux or Mac. Again, so is ST, but Vim works via SSH, it works in Docker, it works everywhere.
  4. Once you learn the commands you can do things quicker: deleting a word is just typing dw, and once fluent you can do this faster than using the mouse or Ctrl+Shift+Right-arrow, Delete. These are small 1% improvements, but they add up.
  5. Vim has a command history. This is really useful for doing repeated things. Sure your search box in other editors has it plus you’ll have recent files that you’ve opened, but every single command that you type is recorded. My example for this is reformatting code with Regex. Once you’ve closed your regular editor your search and replace history is lost. In Vim it’s there waiting for you.
  6. Not only that but anything you did as the last command can be repeated with .. This can be complex things like repeating all the text you just typed in Insert mode. Or if you just cut a line and you want to paste it a few times, now you’re just typing . instead of Ctrl + V.
  7. Syntax highlighting! In the DOS prompt, SSH prompt! Seriously, this is amazing. Windows has been around for 30 years and there’s nothing else I know of that can give you this.
  8. Less chrome. Vim is mostly like using the distraction-free mode of Sublime Text all the time. Less distraction, more thinking.
  9. Everything you’ve got in your editor currently: Tabs, Split screen, Projects, Linting, REPL, Plugins, Sidebar file tree. But we’re still in the donkey DOS prompt here.
  10. Closer to the command line. Again thinning that barrier between you and your code. In Vim you can type ! and then any command line command, e.g. !mkdir temp or !python will allow you to drop into the python REPL and then come straight back to Vim once you’re done.
  11. Vim’s buffers, which are the equivalent of tabs in other editors, are amazing. When you have a regular editor open you’ll typically have 10 or so tabs open, or at least that’s what I had as otherwise it becomes too crammed. With Vim you just keep opening all the files you want into buffers. I regularly now work with 100 buffers open, but then I can very easily switch between them – :b [part of file name] then <Tab> and you switch to the other file, if you have more than one file open with that bit of the name then you just tab through the list, e.g. :b Controller will allow you to tab through all the *Controller* files (buffers) that you have open.
  12. Not strictly Vim itself, but it has excellent integration with FZF and Ripgrep, which are command-line tools (FZF is written in Go, ripgrep in Rust) for fuzzy file finding and ‘find in files’. These tools are ridiculously fast. Having a fuzzy file finder means that you don’t need the folder structure on the left any more. Ripgrep works best on Linux, but anywhere it will churn through GBs of source code. Also, once you have the search results you can do more with them: they open up in a standard Vim ‘window’, so you can search/highlight within your search, and then also run search/replaces on the list that you get back.
  13. Vim sessions are what allow Vim to work in a similar way to Sublime Text: you can save all the files you had open when you close Vim and open up exactly where you were last time (there’s a small sketch of this just after this list).
  14. But Vim sessions are really flexible, the one great thing I’ve found about them is that I can combine all the projects that I’m working on into one. My colleagues use various other IDEs and we have a set of projects each with their own git repo and docker container. My colleagues need to switch projects each time they want to look at code in one project. However I can put all the repos in one folder and then create my Vim session above all of them. Then FZF can find any files amongst them, Ripgrep can search through all of them at the same time. So it means I can jump-to-definition across any project that I have.
  15. Combining all you do with other tools in one. Here are a few things that I now do in Vim that I used to use other tools for: file diffs, git diffs, subversion diffs, todo lists, database connections/commands, git conflicts, subversion conflicts. This is not quite a case of Emacs, where you never need to leave it again, but all my development tools work perfectly inside Vim, so I can use the power of the various commands I’ve learnt in Vim across these other tools.
  16. Git diffs. This is a surprising one, but once you start using the Fugitive plugin, doing a side-by-side diff is easy and comes with nice syntax highlighting.
  17. Git conflicts are handled beautifully with the Fugitive plugin; the majority of developers that I know only know how to use SourceTree or the output from Bitbucket diffs. With Fugitive you can do a 3-way vertical diff (see the Vimcast on Git conflicts), so you have the conflicted file in the middle with the two files you want to merge either side. It is the nicest way possible to do a merge. Even the GUI tools that I’ve seen that do a 3-way merge are pretty ugly. Meld is quite a good one for Linux and Windows (it’s not fully supported on Mac), but it suffers from being slow. In Vim everything is fast, and again I’ve got all my Vim tools handy as the diff windows are just Vim windows.
  18. Todo lists are a simple one – you have things like Org mode in Emacs that you can replicate in Vim, but for the most part Markdown does everything you need.
  19. Database connections are always done in a special application. The main one I’ve used is SQL Server Management Studio (SSMS) – but of course that only works for SQL Server. If you work with MySQL you either need to use things like PHPMyAdmin or just use the MySQL command line; there are sometimes closed source tools for connecting to various databases but I’ve never particularly liked one. Tim Pope recently created the dadbod plugin, which allows you to connect and run commands on all the major databases. This means that, like in SSMS, I can have my SQL file open with syntax highlighting and then highlight a few lines and run just those. This is super powerful: you get all the query results in a Vim window and can use all the regular commands to search that and copy-paste text from there. I still regard SSMS as the most powerful SQL editor I’ve used, but now I can have the majority of the functionality I used there for any database. I don’t have things like query optimisation, but it’s rare that I need that.
  20. Making a tailored editor… typically all you do with other editors is install a few plugins. With Vim it’s expected that you’ll customise almost everything. People with ST share their list of plugins, whereas people with Vim share their .vimrc file, which contains all their plugins and all their settings. It’s the difference between an off-the-peg suit and a tailored suit: other people might not see the difference but you will feel it. You create Vim exactly as you want it.
  21. Made by individuals…
  22. Fully free and open source, it’s inspired a whole bunch of new editors – neovim, gonvim, AMP…
  23. Touch typing becomes more important. Once you use the keys for everything, you encourage yourself to touch type more. This adds benefits to your coding. And as Joel Spolsky says, fast accurate typing is one of the fundamentals of a developer. I’m still not great at this but using Vim is helping me to improve.
  24. Split windows are something that I never bothered with in ST, but recently they’ve become very useful. When I’m trying to implement a new feature based on someone else’s code I find it useful to have a side-by-side view of the two files. Further, I can have the main code I’m working on in one window and then search throughout the code in the other window. Again, you can do things like this in ST but I never really started doing it until I got used to Vim and Vim split windows.
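
To make the sessions point (13 and 14) concrete, here’s a minimal sketch – the session file path is just an example:

" save everything that's open (buffers, windows, working directory) into a session file
:mksession! ~/vim-sessions/all-projects.vim
" later, load the session and carry on exactly where you left off
:source ~/vim-sessions/all-projects.vim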

Lesson 1: Install GVim

GVim is by far the best way to get introduced to Vim; it is a much more standardised way of using Vim than starting in the terminal and hitting problems. I really want to encourage people to try Vim in the DOS prompt, just because it’s amazing to finally see it there, but for anyone starting out just use GVim. I still use GVim on Windows as there’s still a frustrating slowness to editing in the DOS prompt, but almost all my other gripes with it have disappeared over the last two years – the Windows team working on it are doing an amazing job.

Nevertheless, we’ll start with GVim. As well as being more consistent, it allows for discovery: it has a lot of the common commands in the menus at the top, which typically show what the key commands are, so you can slowly familiarise yourself with them.

I suggest installing GVim via Chocolatey, or otherwise you can just download it and install it from the vim.org site (that’s all Chocolatey does behind the scenes).

Hopefully it also means I can help more people: PowerShell users can probably translate the DOS commands more easily to PowerShell than vice versa, and Linux and Mac users used to GUI tools should be able to figure it out too. When I write Ctrl + C people will understand; when I write <C-c> users unfamiliar with Vim / Emacs will stare blankly.

Install Vim in DOS (not required)

If you’ve installed GVim then this also installs a command-line version of Vim. The good part of this is that it comes with the most recent version of Vim – currently 8.1. There are some very nice things that have been added in the most recent version that improve the colour handling inside the Windows 10 DOS prompt.

Add C:\Program Files\GVim\bin to your PATH.

I love using Vim inside the DOS prompt. I think it is the simplest, purest way of using Vim in Windows.

Vim 7.4 also comes with Git for Windows. You can install this via Chocolatey, or just via the Git website.

> choco install -y git

We then need to add the GNU usr tools to our PATH – add C:\Program Files\Git\usr\bin to your PATH.

This gives you all the loveliness of the GNU tools, e.g. ls and grep, as well. If you really want to do yourself a favour, install clink and solarized DOS prompt colours too.

Lesson 2: Basic commands

You can skip this if you know the commands. I knew the basics of these for years before I started immersing myself in the rest of Vim.

Inserting code

You go into INSERT mode by hitting the i key and switch back to NORMAL mode by hitting Escape.

Once you’re in edit mode then it’s fairly similar to other editors, you can move left, right, up, down with the arrow keys, then just type and delete stuff with the backspace or delete keys.

Initially, to stay closer to other non-modal editors, most users will spend all their time in INSERT mode. I personally think there is nothing wrong with this and it is exactly what I did to be as productive as possible in the beginning.

To be more productive, though, it is necessary to learn the other Vim commands; otherwise you’re just giving up all the other features that you’re used to in ST, almost all of which do exist in Vim – they’re just more hidden, or you need to install a plugin for them.

Searching / moving code

A lot of the Ctrl + ... commands that you expect from other editors are handled in Vim’s NORMAL mode – you should see the word NORMAL in the bottom left-hand corner.

This is the weirdest part of Vim, that you delete words via three or four letter commands.
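
A couple of examples of what I mean, typed in NORMAL mode:

dw – delete from the cursor to the start of the next word
d3w – delete the next three words
dd – delete the whole line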

Command | Sublime Text | Vim
Undo | Ctrl+z | u
Redo | Ctrl+y | Ctrl+r
First line | Ctrl+Home | Ctrl+Home / gg
Last line | Ctrl+End | Ctrl+End / G
Line N | Ctrl+g, N, Enter | Ngg
End of the line | End | End / $
Start of the line | Home | Home / 0
Next word | Ctrl+Right | w
Previous word | Ctrl+Left | b
Page up | Pg Up | Ctrl+u
Page down | Pg Dn | Ctrl+d
Find | Ctrl+f, [text], Enter (forward) | /[regex] (forward) / ?[regex] (back)
Replace | Ctrl+h, [search], [replace], Enter (forward) | :s/[search]/[replace]/ Enter (line) / :%s/[search]/[replace]/ Enter (global)
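
As a concrete (made-up) example of the replace row:

" replace every occurrence of colour with color in the whole file
:%s/colour/color/g
" the same, but asking for confirmation at each match
:%s/colour/color/gc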

I actually practiced the commands by installing an Android app with Vim commands and using the free beginner part of shortcutFoo Vim.

After those commands the next most important one is :. This is the most common way of starting the command line typing at the bottom. It’s similar to when you type Ctrl + P into ST.

The first command to type is :help – this shows the first cool thing of Vim: split windows as standard.
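
Two window commands worth knowing straight away, so the help split doesn’t trap you:

" jump between the help window and your file
Ctrl+w w
" close the window you're in once you've finished reading
:q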

The weirdest concept I had (after years of very light usage) is typing :w instead of :x to write the file, because now we actually want to stay in Vim, rather than get the hell out as fast as possible.

Jesus for Atheists

I guess this is not much more than a review of Leo Tolstoy’s book ‘A Confession’ and his other books on religion. I’ve been an atheist for as long as I have felt that the words in the Lord’s Prayer were ridiculous. I believe the Bible is a work of fiction like any other book that talks of dragons and giants. I find it crazy that rational people can believe that. I find the hypocrisy of the Bishops who abused children and the cover-up by the Church unforgivable – but I guess many worse things have been done in the name of Christ. Greek and Roman Gods are seen as fake but the one dreamt up 500 years later is somehow real. If there’s only one God then only one of Christianity, Islam, Buddhism or any other religion can be right and the rest of us are damned.

I believe in Science. Lots of things can’t be explained by science yet, but science doesn’t pretend to know things it doesn’t. It remains humble: it never believes you get further than a theory, even if you work your whole life on it. Science knows that what we know today is partially right but still very much wrong. Science has its problems and the people that abuse it, but it is explainable and doesn’t require tales of fiction to back it up. If science is right we’re all saved from the hellish afterlives, but if science is right there is no afterlife.

But at the same time, I’m doing what all other thinking people do and searching for meaning in life. My father believed that the basic rules of Christianity were good ones to live by, but nothing more. I wonder at the cathedrals built by people looking to worship God. I wonder at the vast amount of good work that Christians do. I wonder at the happiness that my Grandmother had from just doing the flowers and helping at the church. Christians may believe in something fake but their happiness is real. Their good deeds are real. But I can’t ever join that because I know it’s fake. I can’t eat the body of Christ and believe that it’s really his body, or drink his blood and really think it’s his blood, and not be put off by the rather disgusting idea of cannibalising God. I can’t bow down to an almighty nothing.

Tolstoy’s Confession is basically talking about the same thing. But he did much worse things in his life. He killed people in duels, killed people in war. I don’t have the direct blood of anyone on my hands – just the blood of all the animals that have been killed for me to eat and for the medicines I take to be safe. He did all that I have done but much more: he spoke to the great scientists of his day, he knew the Orthodox Church very well, and read up far more than me on Buddhism and Islam. He read all that he could searching for meaning. He read all the philosophers he could, all the poets, looking for something, anything, with meaning.

This is the closest that I have come to someone who feels as I feel. He saw clearly how science showed that the teachings in the Bible were fake. But his answer was the biggest crush to any hopes I had. He simply put his faith in Christianity – but one that ignored all the clearly made-up stories and kept the real stories and teachings of Jesus, together with the existence of God. Of course the stupid thing is that in my search for the meaning of life, basically the only answer to that is religion. Religion is just all the different ways that people have tried to create meaning in life. So Tolstoy simply returned to Christianity, the Christianity of the poor, where a simple, honest life gets rewarded in this life, and the next.

But this then leaves me nowhere. You get to the top of the tower searching for the answer, you open the trapdoor at the top and there’s nothing but emptiness. The meaning of life is religion and there is no religion, so there’s no meaning. Humanists and other atheists try to claim that there still is meaning, that you can lead a good life and that this one life becomes more precious. But that’s not how I feel and it’s not how Tolstoy clearly lays it out.

So this is what I understand from ‘A Confession’. Life is either finite or infinite. Scientists believe it’s finite, but God and the Universe are infinite. Religion simply matches up our finite life with an infinite afterlife. Science can never explain this, just as we can never count all the way to infinity. So if science is right, then we, our children, our grandchildren and all the people and animals and insects and bacteria that ever live on this planet are pointless. Without meaning, to be forgotten. We’ve been going for 14 billion years and in 1,000 billion years everything will be gone. An infinity of nothing. That’s it, there’s no way round it; there’s no hope that science will find an answer. There’s no experiment to trial, no thought experiment to see it more clearly. Newton and Einstein won’t save us. They’ll be gone and forgotten, and so will we.

Tolstoy wrote that all you could do if you believed there was no God, was to either kill yourself or limp on like a coward.

But Tolstoy carries on about what his version of the real Christianity should be. He views it above all other religions because it puts love as the only rule. “Love thy neighbour” is the only commandment. Thou shalt not kill isn’t needed if you love the one you want to kill. But this is not the jealous love that we think of where a man murders his wife because he loves her too much. The love taught by Jesus is a love that conquers all. You must simply turn the other cheek if someone strikes you. The law of love comes before the law of violence. There is no space for violence, bullying, hatred, jealousy. The law of love is available to all, you don’t need money, talent, beauty, fortune, family. You just need to love those around you.

I understand this kind of Christianity. I can understand why it is so powerful. There’s nothing to laugh at when you see the power of what love can do. The impact that Gandhi managed to have through getting independence for India without resorting to war – that was love. I don’t see humanists or atheists talking about love when they talk about the meaning of life. It’s just a small part of what they believe. I also agree with how he explains that Christianity has more meaning than Paganism because it puts love as the one and only law.

The only trouble I had with Tolstoy’s discussions about religion was the conflict between “non-resistance to violence” and “non-violent resistance”. You can’t resist if you believe in non-resistance. But I’m happy to live with that conflict. I’d see it as never using violence to resist, but if someone is violent to you then you don’t resist them. But you can resist them as long as they don’t turn to violence.

This love at the centre of Christianity is something that I’ve never heard before. “Thou shalt not kill” comes up, “forgive us our trespasses” and so forth. Jesus talks about love, but I’ve never heard it expressed that love is the only thing that matters. But what I see is that this line of thinking is open to all. You can be an atheist but follow the guidance of Jesus and put love above all else. You can love your family more, love your partner more, love your children more. Forgive them for when they are angry, apologise for the things you say and do. You can love your fellow neighbour and check how they are doing and what you can do for them.

I’m not saying I do these things, but I see it as perfectly valid that an atheist can follow the teachings of Jesus where he speaks about love. There is no church required. There is no afterlife required. But if you do choose to dedicate your life to loving others it will be a happy one. You can believe in science, think God is a fraud, limp cowardly forward without meaning, but still find happiness with Jesus and Love.

Headspace is better than coffee

More generically this would be meditation is better than coffee, but the only meditation I do is via Headspace.

My central point is that it’s better to give up coffee and replace it with meditation. I’ve been drinking a lot of coffee for years – I guess 4-5 mugs of coffee per day plus at least one mug of tea. But during these years my ability to concentrate has been terrible, so I’ve slowly been increasing the amount of tea and coffee ever further. I do IT and avoid any kind of management, so the one thing I rely on is being able to concentrate for long periods.

But I can’t concentrate, so I need to change something. The first article that got me thinking was the BBC health article on waking up earlier:

  • Wake up 2-3 hours earlier than usual and get plenty of outdoor light in the morning
  • Eat breakfast as soon as possible
  • Exercise only in the morning
  • Have lunch at the same time every day and eat nothing after 19:00
  • Banish caffeine after 15:00
  • Have no naps after 16:00
  • Go to bed 2-3 hours earlier than usual and limit light in the evenings
  • Maintain the same sleep and wake times every day

The one that was easiest to implement was “Banish caffeine after 15:00” – it was easy enough to stop drinking coffee after 3. The next trigger was a Quora post saying that when waking up we need water, not coffee, to help our kidneys get going.

For kidneys is very important that you go sleep on time, before 10pm if possible, and that you do a lot of exercises in the morning or afternoon (but not in the evening, our body should rest at that time of the day). Avoid any stimulants like coffee and alcohol which directly influence on our kidneys, making them dry and slow. When you wake up the first thing you should do is drink loads of warm water, this will warm up your kidneys and put them in work.

Obviously this is from a random stranger on the internet, so trust it less, but it’s easy enough to try drinking water first thing in the morning. However I don’t do this consistently – drinking water is boring, coffee tastes nice. I was doing this and not much was changing. Drinking no coffee after 3, but drinking quite a lot in the morning. Stopping regularly with my work to go make some more coffee or re-heat the coffee pot.

In the past I got into Headspace for doing meditation. I kept it up for about 2 years, but I stopped about 18 months ago. I hadn’t noticed any productivity increase during that period so I didn’t see the benefit, but looking back I realised that I managed to spend a full 18 months learning Machine Learning and Deep Learning through Coursera and also learning about Category Theory – almost entirely in the evenings. So it does seem like meditation helps – but maybe I’m just making up the link, there’s lots of other factors involved. So I tried to restart doing Headspace 6 months ago, but I would do it maybe one or two times and then stop for weeks on end, nothing changed. In an attempt to get myself to get Headspace done in the morning I came up with this phrase:

“Headspace is better than coffee.”

My main idea behind it is that Headspace does something similar to coffee – it helps me concentrate but it also has long term benefits. Coffee does bugger all except make you run around like a headless chicken for 30 minutes until you get the next ‘fix’ of caffeine. This statement is all well and good but it still didn’t change my habits. I had similar ideas about apples, that it’s better to eat an apple than drink coffee – but not sure where I heard that from.

I have finally, though, come to the conclusion that coffee is completely ineffective. It doesn’t work, it doesn’t help, so the only thing left to do is stop taking it completely. I might as well be completely unproductive without coffee as completely unproductive with coffee. Instead, in the mornings, I have switched to drinking green tea with a slice of lemon squeezed into it (and sometimes with local honey added too). There are lots of benefits to green tea, so at least it’s useful. Then on top of that I take some grapes or fresh pineapple up with me as I start work.

Further to this, I now spend 5 minutes doing yoga and then the 15 minutes it takes to get Headspace done. The irony is that by not making coffee I actually save about 15 minutes a day from the time I spent faffing around brewing and re-heating the stuff, so I now have time for Headspace. I’ve been doing this for 3 weeks now and the turnaround has been fantastic. My mornings are now almost totally focussed and I get a full morning’s work done.

Aside from that I also make sure to leave my phone outside the room that I’m working in and I also stopped going for walks in the morning. Walks are nice but they added too much pressure from the hour I lost going for a walk. I hope to start walks again in the afternoons because I know they’re good, but for now they’ll have to wait.

Vim Language Server Client (LSC) plugin first impressions

After months of problems with CoC: https://github.com/neoclide/coc-eslint/issues/72 I’ve given up and I’m trying other plugins.

I used ALE for ages with eslint, but never got it decently working using LSP. I tried ALE with Deoplete – but ALE is very slow and deoplete took ages to configure (python3 + pip + msgpack 1.0+ is a pain to install).

I came across this blog post showing love for Language Server Client (LSC): https://bluz71.github.io/2019/10/16/lsp-in-vim-with-the-lsc-plugin.html.

But this had a problem: LSC doesn’t do linting, only LSP, and I need eslint.

However, this reddit comment described perfectly the approach of installing ALE and LSC together.

ALE with eslint, LSC with tsserver (+ LSP compliant wrapper typescript-language-server).

This seems like a good combo. ALE was reliable for ages and slow linting doesn’t matter too much, it’s slow auto-completion that’s a problem. Also ALE doesn’t have the floating window features that CoC and LSC do.

LSC, along with ALE, is almost entirely Vimscript (with some Dart that doesn’t require a further install), which makes it an easy install – having no further dependencies makes a plugin much nicer to live with in Vim.

Installation of raw plugin

The first thing that disappoints me in the plugin’s current instructions is that there’s no plugin-manager code to copy-paste; I love it when I can blindly follow the instructions with copy-paste and it all magically works.

So, to install the plugin in your .vimrc or init.vim, for example using Plug or Vundle:

Plug

Plug 'natebosch/vim-lsc'

Then run :PlugInstall

Vundle

Plugin 'natebosch/vim-lsc'

Then run :PluginInstall

Pathogen

git clone https://github.com/natebosch/vim-lsc ~/.vim/bundle/vim-lsc

Then run :Helptags to generate help tags

Dein

call dein#add('natebosch/vim-lsc')

This will work too with any of the other plugin managers that support github plugins as sources.

Installation of typescript language server

However at this point you have a problem because there is nothing to indicate that anything is working. No auto-completion happens, nothing.

If you install Coc, from what I remember, auto-completion starts happening immediately.

You can run :LSC<tab> which should open up a list of the possible LSC commands to at least show that you have it installed.

Coc starts working for pretty much every file with an auto-complete using the words in the current file. This is great for the immediate feedback that you’ve installed the plugin correctly. Plus with Coc you can run :CocInstall coc-eslint, which installs eslint support and then, from what I remember, again works immediately.

I want to test LSC with the typescript LSP, which I’d already installed with:

npm install --global typescript

Now, fair enough, but rather frustratingly the typescript tsserver isn’t a fully compliant LSP server, so you need the LSP-compliant wrapper typescript-language-server mentioned above, installed globally in the same way as typescript:

npm install --global typescript-language-server

I have two specific requirements which makes my commands differ:

  1. I use yarn
  2. I installed yarn/npm using sudo

So my actual command was:

sudo yarn global add typescript typescript-language-server

Coc mimics VS Code and works with tsserver out of the box, which saves you from having to install the extra library. If LSC could be made to work with tsserver it would be a nice step. Coc even goes so far as to install tsserver for you, so you just need :CocInstall coc-tsserver and the magic starts happening. So you can install and get Coc working without having to leave Vim – the same happens for eslint, because typically developers using eslint will already have it in their project and it just gets picked up magically.

The frustration with typescript-language-server is that there is the far too similarly named javascript-typescript-langserver. I have no idea of the difference, nor do I really care – I just want the one that works. The LSC documentation for JavaScript language servers fails here: it shows me how to configure both of them but gives me no idea which one I should prefer.

I’m very much a proponent for the “don’t make me think” mantra because that’s what most people are after when they’re trying to install a plugin.

Why go through all the work of writing your plugin in Vimscript, only to leave the documentation bare and leave people frustrated?

Configuration

Annoyingly the configuration for Javascript is buried. There’s no mention in the README that there is a wiki that lists all the language server configurations, and even in the wiki the home page is bare, so you have to spot the pages menu on the left-hand side.

Then when you get to the Javascript section you have the previously mentioned problem that there are two servers and you don’t know which to choose.

I already have tsserver installed; it’s what VS Code uses, and so what 90% of developers now use, so I’ll use that.

let g:lsc_server_commands = {
  \ 'javascript': 'typescript-language-server --stdio'
  \ }

A further frustration, though, is that there are no comments in there giving helpful tips on how to set the servers up properly. The bluz71 blog above has the useful extra hint:

For LSC to function please ensure typescript-language-server is available in your $PATH.

So you should make sure to add the npm/yarn global installation directory into your path – it’s easy enough to find instructions for this. To test make sure you can run this in the directory where you start Vim:

$ typescript-language-server --version
0.4.0

Obviously you’ll probably get some other version number, but you should at least get a response. You don’t set up a path to the language server binary in the config so it assumes you’ve got it directly available.
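
If you’d rather not touch your shell profile, you can also prepend the directory from your .vimrc instead – a sketch, assuming yarn’s default global bin directory (use whatever yarn global bin actually reports on your machine):

" make globally installed language servers visible to Vim itself
" (~/.yarn/bin is an assumption – substitute your real global bin directory)
let $PATH = expand('~/.yarn/bin') . ':' . $PATH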

That’s all folks… not quite

That should be that. With a restart of Vim the magic should happen – open up a Javascript file, start typing away and BAM, auto-complete pop-ups should start appearing.

However, for me it didn’t. Patiently typing a few characters, and then reading the documentation on how many characters you need to type before something happens, did nothing.

I tried a :LSClientGoToDefinition and it spewed out an error:

Error detected while processing function lsc#reference#goToDefinition[2]..lsc#server#userCall:
line    2:
E684: list index out of range: 0
E15: Invalid expression: lsc#server#forFileType(&filetype)[0]

Firstly getting errors is always bad and secondly this error message makes no sense.

The problem here is that there is no ‘health check’ that I could find. ALE gives a very good diagnostics page via :ALEInfo. The :LSClientAllDiagnostics and :LSClientWindowDiagnostics commands, which sound like they might be useful, aren’t at all in this situation.

Even after reading through :help lsc I did not spot anything to help with spotting issues. But the intro there is very helpful:

There is no install step for vim-lsc. Configure the filetypes that are tracked
by installed language servers in the variable “g:lsc_server_commands”. Each
value in this dict should be a string which corresponds to either an
executable in your “$PATH”, an absolute path to an executable, or a
“host:port” pair. If multiple filetypes are tracked by the same server they
should be entered as separate keys with the same value. The value may also be
a dict to allow for additional configuration.

It was only after re-reading the bluz71 blog again that I spotted my problem:

For a given filetype the LSC plugin will take care of launching, communicating and shutting down the named language server command.

My problem is that because I have the mxw/vim-jsx plugin, my javascript filetype becomes javascript.jsx, so my config needs:

let g:lsc_server_commands = {
  \ 'javascript.jsx': 'typescript-language-server --stdio'
  \ }

Now I did a re-source of my .vimrc via :source % and then tried again with my Javascript file and nothing worked still.

However, after a restart of Vim, I got an error that flashed up in the Vim command line and disappeared, but then finally the magic started to happen.

So to know if LSC is working, the first thing you notice is that it subtly highlights the word you are on and any other words (‘symbols’) that match that.

Now auto-completion starts working and I can tweak away with key mappings. However I don’t really care about key mappings – they’re easy to tweak.
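
If, like me, you just want sensible defaults, vim-lsc can (if I’m reading its docs right) apply its standard mappings for you rather than you defining each one:

" use vim-lsc's default key mappings – see :help lsc for what they are
let g:lsc_auto_map = v:true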

Final thoughts

This does seem like a great plugin now that it’s working. It has the speed and functionality of Coc, and it works, which is a major plus point over Coc at the moment.

What I fundamentally care about when trying these LSP plugins is getting something to work as fast as possible so I can test out the plugin. I can then add other language servers and configurations, but until I’ve got something working there’s nothing but frustration.

Embracing the Neovim/Vim Terminal

I’ve finally started using Neovim’s terminal. Vim 8 has this too of course, but it was Neovim that championed it and included it by default.

I switched to Neovim about a year ago to see if there was anything missing compared to Vim. I find Neovim fascinating purely because of how faithful it is to Vim. They had the perfect answer to open source – “if you don’t like it fork it” and make your own. Vim itself is healthier because of the additions that Neovim included.

I literally can’t visually tell if I am using Vim or Neovim. You have the one initial setup to use your existing .vimrc file and the rest just works perfectly.

I’ve never had to raise a bug because of some incompatibility. All the Vim plugins I use work.

The sheer scale of porting an editor that has been around for 30 years and implementing what appears to be 99.9% of its functionality is amazing.

First steps

One of the big things of Neovim was its inclusion of a terminal.
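
For anyone who hasn’t tried it, you open it from NORMAL mode with the :terminal command – Neovim opens it in the current window, Vim 8 in a split:

" open a shell in a terminal buffer
:terminal
" or run a specific command in it straight away
:terminal npm run start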

I’d tried the terminal a couple of times but I found it jarring; it’s like the Vim experience all over again – you’ve no idea how to quit.

You need a combination of Ctrl + \, Ctrl + n, which is just as insane as :wq for those who don’t know Vim. I don’t know whether it’s an intentional homage, but all I know is that it scared me off for a year.

Somewhere during that year I tried tentatively again and added this (probably from this Vi StackExchange answer) as suggested to my vimrc:

" terminal easy escape
tnoremap <Esc> <C-\><C-n>

But then I just left it there. I always just kept a separate terminal open to run the commands I needed there. This currently is running a bunch of node docker containers. This obviously means alt + tab between the windows, or ctrl + pg up/pg down if you have terminal tabs (I’m using Ubuntu mostly).

However I kept seeing my colleagues running their terminal session within VS Code. I’d roll my eyes at the wasted screen space, but it did seem kind of natural, you have another step in your development process combined into the one editor.

I was always a fan of “Linux is my IDE”, so I also didn’t see a problem with switching to the terminal. It also feels fairly natural, as both Vim and any other terminal programs are still running in the terminal, so I saw it as a benefit of Vim.

It always seemed natural that if you’re running a terminal process you should open up an actual terminal, not some emulator, get yourself the real thing.

However there are a few niggles:

  1. Searching the terminal output, there are ways but they aren’t as nice as the Vim searching
  2. Scrolling naturally around the output, to the top and back to the bottom – especially when it’s 10,000+ lines of log output
  3. Copying and pasting between the terminal output and Vim
  4. It’s janky: you either waste the screen space of the terminal tabs or have to alt + tab around to find the other terminal window

Welcome back

So I dived in again a couple of days ago. Having the Esc remapping is so natural that I’d forgotten I’d added it to my vimrc – it was simply the first thing I tried in order to get out. I had to go searching for it again just because I remembered the frustration of quitting from before and didn’t understand why it was now so easy.

But now, suddenly it’s awesome. Once you’ve hit Esc it’s just another buffer. It can hide amongst my hundreds of other buffers, so no screenspace wasted. This obviously assumes you’re using Buffers not ‘Tabs’ (aka Tab Pages).

It resolves all the problems above:

  1. Searching – I now have my standard \ command to search and highlight plus grepping/ripgrep/fzf
  2. Scrolling – now again all the gg top and G bottom commands are so much nicer to use, I never feel the need to use the mouse to scroll aimlessly amongst the text, I can always use Ctrl + f/g and j/k too – all the lovely Vim commands are now available
  3. Copying and pasting is just Vim buffers no need for the external clipboard
  4. As it’s all just buffers there’s no wasted screen space and it’s easy to switch to with buffer switching.

So, while I understand it’s just an emulator and not the real terminal, bash is kind of designed to be emulated, so not a real problem I guess – and now I get terminal + evil mode!

So thanks again Neovim for adding this in.

Steps of caution

There are some peculiarities of Neovim terminals to get used to.

  1. Esc is sometimes used in the terminal for genuine reasons, for example cancelling an fzf request – with the remapping this now won’t work. My solution has been to make the escape sequence <leader><Esc> (sketched just after this list); this lets Esc still reach the terminal program while keeping it easy to quit out.
  2. The inception of running Vim via an SSH session inside a Vim terminal has a few niggles, but handles pretty well.
  3. It’s a bit hard to re-find your terminal if you’ve got a couple open – you need to remember the buffer number mostly
  4. If you close Vim you’ll kill all the processes running there without warning
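
The remapping I mean in point 1 is along these lines:

" drop back to normal mode with <leader><Esc>, leaving plain Esc free for
" terminal programs such as fzf
tnoremap <leader><Esc> <C-\><C-n>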

Getting started with ReasonML

I write this as a stumbling, fumbling guide for how to actually get started, when all you have is a JavaScript and React background with no knowledge of OCaml.

I’m using a combination of Fedora and Ubuntu.

Obviously start here: https://reasonml.github.io/

Installation

I even went off the rails at this point because you need globally installed bsb support and you run into the sudo/not-sudo argument. Lots of people want you to install yarn/npm as non-sudo, but I do it as sudo.

The guide suggests that you do it without sudo and makes no mention of using sudo. So for people like me the documentation goes wrong from the very first step. This makes me sad.

The issue being that because I install npm with the Fedora package manager it’s installed as sudo. So I need to run sudo npm install -g which is all fine and good, but some well meaning people rightly express that this should be avoided if possible, but in the case of Fedora this is unavoidable.

I wrote this all up in an issue #2168, with the most relevant comment I made:

In the npm troubleshooting guide and Grunt getting started guide they have the following advice for global installs:

(You may need to prefix these commands with sudo, especially on Linux, or OS X if you installed Node using its default installer.)

I also hit the same problem with my ubuntu install.

If I run without sudo then the yarn add completes fine, but I get the following error if I try to run bsb:

Command ‘bsb’ not found

So I have to run:

sudo yarn global add bs-platform

However now, you have to run bsb with sudo too. Otherwise you get the following error:

$ bsb -init hello -theme basic-reason
Making directory hello
npm WARN checkPermissions Missing write access to /usr/lib/node_modules
npm ERR! code EACCES
npm ERR! syscall access
npm ERR! path /usr/lib/node_modules
npm ERR! errno -13
npm ERR! Error: EACCES: permission denied, access '/usr/lib/node_modules'
npm ERR!  { [Error: EACCES: permission denied, access '/usr/lib/node_modules']
npm ERR!   stack:
npm ERR!    'Error: EACCES: permission denied, access \'/usr/lib/node_modules\'',
npm ERR!   errno: -13,
npm ERR!   code: 'EACCES',
npm ERR!   syscall: 'access',
npm ERR!   path: '/usr/lib/node_modules' }
npm ERR!
npm ERR! The operation was rejected by your operating system.
npm ERR! It is likely you do not have the permissions to access this file as the current user
npm ERR!
npm ERR! If you believe this might be a permissions issue, please double-check the
npm ERR! permissions of the file and its containing directories, or try running
npm ERR! the command again as root/Administrator.

npm ERR! A complete log of this run can be found in:
npm ERR!     /home/channi16/.npm/_logs/2020-05-02T11_56_03_062Z-debug.log
failed to run : npm link bs-platform

So you have to run:

sudo bsb -init hello -theme basic-reason

The problem with this is that it then creates the hello directory and all its contents as root. However, you don’t always need to use sudo. It seems like it’s either the first time you use it, or perhaps when you use a new template. After that you can run bsb -init my-dir -theme basic-reason without sudo. It’s not even about using a new theme – you can init with themes you haven’t used before. It appears to be just the first time that you run bsb -init that it requires sudo.

Editor plugins

The advice here is quite simple. Use VS Code unless you’re like me and you want your freedom. In that case, if you use Vim, then you’re in luck: I’ve already wasted the 10 hours for you trying all the different combinations of plugins. See my previous blog post on Using ReasonML with Vim / Neovim.

Initial coding of the demo project

I started with the demo project, this builds a Demo.re file to a Demo.bs.js file that can be run in node.

React our way up the tree

I wanted to get something displaying in the browser – but the basic demo is node only. I want raw JS.

The react example gives raw JS output – but it has to go through webpack.

Kinda annoying – but I guess webpack converts the BuckleScript node-style JavaScript into browser-style JavaScript.

Installing the demo

I ran the following to install the create-react demo:

bsb -theme react -init neural-network-re

Running the demo

This installs a runnable demo. I of course hit an error when running the code:

[ian@localhost neural-network-re]$ npm run webpack

> neural-network-re@0.1.0 webpack /var/www/vhosts/reasonml/neural-network-re
> webpack -w

/var/www/vhosts/reasonml/neural-network-re/node_modules/webpack-cli/bin/config-yargs.js:89
describe: optionsSchema.definitions.output.properties.path.description,
                                           ^

TypeError: Cannot read property 'properties' of undefined
    at module.exports (/var/www/vhosts/reasonml/neural-network-re/node_modules/webpack-cli/bin/config-yargs.js:89:48)
    at /var/www/vhosts/reasonml/neural-network-re/node_modules/webpack-cli/bin/webpack.js:60:27
    at Object.<anonymous> (/var/www/vhosts/reasonml/neural-network-re/node_modules/webpack-cli/bin/webpack.js:515:3)
    at Module._compile (module.js:653:30)
    at Object.Module._extensions..js (module.js:664:10)
    at Module.load (module.js:566:32)
    at tryModuleLoad (module.js:506:12)
    at Function.Module._load (module.js:498:3)
    at Module.require (module.js:597:17)
    at require (internal/module.js:11:18)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! neural-network-re@0.1.0 webpack: `webpack -w`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the neural-network-re@0.1.0 webpack script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /home/ian/.npm/_logs/2018-11-26T22_05_12_616Z-debug.log

It’s so frustrating when things don’t ‘just work’. Anyway, it turns out to be a webpack error that they fixed in the project back in October; the fix is to upgrade the webpack-cli version in the package.json.

Also it looks like I’m running an old version of bsb as it appears to install an old version of the project. My version has the stupid idea of trying to get hot reloading running first, rather than a simple build.

It looks like in the new version they simplify it and just run the regular build. At least then it’s obvious that it’s a webpack error rather than some weird compilation error. It’s now just:

npm install
npm run build
npm run webpack

Then open src/index.html

That is pretty sweet! Got me a lovely simple index.html page up and running.

Hot reloading

Now we can attempt the hot reloading.

npm run start

Then in a separate terminal:

npm run webpack

This then hot reloads the code – it doesn’t auto-refresh the webpage, but at least I understand it.

Steps towards a Neural Network

My reason for doing all this is to convert a simple Neural Network I wrote in JavaScript into ReasonML.

Now the first thing I want to try and do is get some SVG displaying, this isn’t so simple because you need to do it in JSX that gets output to god knows what that eventually produces an SVG.

However via a forum post there is a link for how to display a basic SVG. The slightly cryptic thing in there is that it doesn’t display anything to the screen, it just creates the SVG in a variable.

I made some modifications to the demo code to output a simple SVG circle and then got the SVG displaying on the screen too:

let m =
  <svg
    width="200"
    height="200"
    viewBox="0 0 200 200"
    xmlns="http://www.w3.org/2000/svg">
    <circle
      cx="5"
      cy="5"
      r="5"
      style=(ReactDOMRe.Style.make(~fill="black", ()))
      xmlns="http://www.w3.org/2000/svg"
    />
  </svg>;

ReactDOMRe.renderToElementWithId(m, "target");

The online tool (https://reason.surge.sh) is quite interesting because you get a lot more tooltips than I’m used to (this is because it has the language server plugged in). There was also the fun of displaying the style attribute, which was actually relatively painless. The ReasonReact Docs for Style gave me the perfect example:

<div style=(
  ReactDOMRe.Style.make(~color="#444444", ~fontSize="68px", ())
)/>

So now I’ve got an SVG circle appearing in the web version and I can use this in my local version without problem.

Dom dom dom…

Ideally for my basic project I didn’t want to use React. I have a simple HTML page that injects a few <p> tags and some <svg> circles. I do this easily enough with pure JavaScript, so I shouldn’t need React.

However trying it was very painful. Every DOM Element in Reason is an Option. So you have to spend your life with this ‘could be null’ chap and handle this could be null at every stage.

Basically it seems like the DOM is a type nightmare and trying to apply correct types to it produces your own personal hell.

  let _ =
    Document.getElementById("root", document)
    ->Belt.Option.map(_, Element.setInnerText(_, "Hello"));
  ();

  let el =
    document
    |> Document.createElement("p")
    |> Element.asHtmlElement
    |> unwrapUnsafely;
  /* let root = Document.getElementById("root", document); */
  document
  |> Document.getElementById("root")
  |> map(Element.appendChild(el));

The above is the equivalent of:

var root = document.getElementById("root");
root.innerText = "Hello";

var el = document.createElement("p");
document.getElementById("root").appendChild(el);

But trying to create an element, set its innerText and then append it was beyond me after 8 hours of coding.

This is along with using the experimental (but interesting) library bs-webapi, which is also referred to in a useful book on Web Development with ReasonML.

It’s at this point I figure that ReasonReact seems to handle the Dom interaction better. I was able to get a fairly complex svg element displaying which is the hardest part. So perhaps just better to stick to that.

Hidden pipes

Note that here they use the “reverse-application” operator |>. There’s an excellent description about this in this 2ality blog post on ReasonML functions:

The operator |> is called reverse-application operator or pipe operator. It lets you chain function calls: x |> f is the same as f(x). That may not look like much, but it is quite useful when combining function calls.

In the reason docs they refer to the -> operator instead in the Pipe First section:

-> is a convenient operator that allows you to “flip” your code inside-out. a(b) becomes b->a. It’s a piece of syntax that doesn’t have any runtime cost.

It seems that -> is a subtly different operator from |> as the only reference to |> appears later on that page:

We also cannot use the |> operator here, since the object comes first in the binding. But -> works!

Frustratingly though the reason documentation just includes this mention of |> without actually saying what it is. The operator comes direct from OCaml, see ‘Composition operators’ in the Stdlib documentation.

Note that there is the similarity to the Javascript pipeline operator.

Belt

Also note the use of Belt.Option.map above. The Belt module is a community driven helper module:

A stdlib shipped with BuckleScript. This stdlib is still in beta but we encourage you to try it out and give us feedback. … The motivation for creating such library is to provide BuckleScript users a better end-to-end user experience, since the original OCaml stdlib was not written with JS in mind.

Easier DOM

Trying the latest bsb -init my-react-app -theme react-hooks includes the following snippet in src/Index.re:

  [@bs.val] external document: Js.t({..}) = "document";
  
  let title = document##createElement("div");
  title##className #= "containerTitle";
  title##innerText #= text;

which generates:

  var title = document.createElement("div");
  title.className = "containerTitle";
  title.innerText = text;

Now this looks a lot simpler. But of course I’ve no idea what the ## or #= operators do.

That’s as far as I’ve got, but at least now we have code that looks like recognisable JavaScript.

Using ReasonML with Vim / Neovim

Here’s my attempts to get ReasonML working within Vim and the journey it took me on to understand what language servers are. If you’re using Vim this is essentially step 2 of the ‘quick start’ guide for reason: editor plugins.

Warning to all who enter here… this took me 10+ hours to fix. It’s because my setup doesn’t mesh exactly with their expected setup. If you want an easy life just use VS Code. Failing that if you want an easy life with Vim, just use the following setup:

This assumes that you can install the LanguageServer which you have to do for VS Code as well.

Vim support

By default for using linting I have the following setup:

  • Vim 8
  • Vundle
  • ALE (Asynchronous Lint Engine)

This has managed to work ‘OK’ for JavaScript with a bit of linting. But I never got fully fledged Language Server Protocol (LSP) integration working; I don’t really know why.

My initial attempts with my default setup were a total failure. Mostly because I find the LSP concept hard to understand. ALE should be a LSP ‘thing’. So it should be able to act as a Language Client and talk to Language Servers.

The repeated issue I come across is that to get LSP working you need the vim-plug Vim plugin manager. I don’t particularly know why, I guess it works better for these more complex plugins.

Looking through my .vimrc file it looks like I tried to get ALE to work as a LSP.

I switched to Neovim as part of this to minimise the number of plugins that I’m installing.

Switching from Vundle to vim-plug

Until now I’ve always used Vundle; it’s a good, basic plugin manager. But I keep hitting plugins that only have instructions for vim-plug and not Vundle. vim-plug seems as good as Vundle, so let’s try it and see how much work the conversion is.

It’s actually ridiculously easy – much respect to vim-plug, and probably to Vundle too, for both having very similar and easy-to-replace syntax. I replaced the Vundle lines at the start:

-filetype off                  " required
-set rtp+=~/.vim/bundle/Vundle.vim
-call vundle#begin()
-
-Plugin 'VundleVim/Vundle.vim'
+call plug#begin('~/.vim/bundle')

Then I replace all Plugin with Plug and then at the end:

-call vundle#end()
-filetype plugin indent on    " required
+call plug#end()

Then I ran :source % and :PlugInstall and it magically installed all my 31 plugins in 10s and they all seem to be magically working. If you use the .vim/bundle installation directory then vim-plug doesn’t even need to install anything.

There’s some useful instructions in the vim-plug wiki on Migrating from Vundle.

Magically also my installation of deoplete worked correctly. So now deoplete pops up all the time as I’m typing – I’m assuming it’s possible to make it less in your face…

if has('nvim')
  Plug 'Shougo/deoplete.nvim', { 'do': ':UpdateRemotePlugins' }
else
  Plug 'Shougo/deoplete.nvim'
  Plug 'roxma/nvim-yarp'
  Plug 'roxma/vim-hug-neovim-rpc'
endif

Because I’ve switched to Neovim, that installation is simplified as well.

But I’ve now fallen into the Neovim-only trap: because my deoplete installation only works with Neovim, I now get an error if I try to use plain Vim. Perhaps I can just wrap the whole deoplete block in the has('nvim') check and skip it otherwise. This works, so now vim-plug just doesn’t use deoplete if I’m in Vim.
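In vimrc terms the fix is just that guard – a sketch of what it boils down to:

  " only pull in deoplete under Neovim; plain Vim skips the block entirely
  if has('nvim')
    Plug 'Shougo/deoplete.nvim', { 'do': ':UpdateRemotePlugins' }
  endif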

Note that Deoplete should start working immediately after you install it and you should start seeing a popup box as you type.

Installing the language server

This was just downloading the zip from the language server releases page and unzipping it to ~/rls-linux.

Figuring out how to get ALE to work with reason-language-server

I’m using ALE rather than the recommended autozimu/LanguageClient-neovim. ALE is also a Language Client and so should also work fine.

Install via:

Plug 'dense-analysis/ale'

ALE needs to know about the existence of the reason-language-server and thankfully that support has been added. You can go to :help ale-reasonml-ols and it tells you the correct config:

  let g:ale_reason_ls_executable = '~/rls-linux/reason-language-server'

After this I expected to restart Vim and have it magically work, which it didn’t.

The path has to be absolute (as noted in the config information for the LanguageClient-neovim in the README):

  let g:ale_reason_ls_executable = '/home/ian/rls-linux/reason-language-server'

You can also use the magic of Vimscript’s expand() to handle it for you:

  let g:ale_reason_ls_executable = expand('~/rls-linux/reason-language-server')

This then showed promise. Completion would work nicely and you get useful information coming up as you type; this also integrated with ALE via:

call deoplete#custom#option('sources', {
\ '_': ['ale'],
\})

It took a while to see if all the functionality was there. It picked up linting errors, which are put into the location list (:lopen), and would give useful info with :ALEHover.
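Those are commands rather than keystrokes, so in practice you’d probably map them; a small sketch (the key choices are mine, not ALE’s defaults):

  " quick access to the ALE hover info and the location list of lint errors
  nnoremap <leader>h :ALEHover<CR>
  nnoremap <leader>l :lopen<CR>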

I compared it to VS Code, which does manage to implement this better. The error messages get formatted properly; for some reason the line breaks in the Vim error messages don’t get applied. Also you get the ‘Hover’ (equivalent to :ALEHover) info showing up in context as you type.

I then tried to format the code. ALE has an :ALEFix command that I know works for eslint. It helpfully suggests that you need to configure the correct ‘fixer’ in .vimrc. However, once that has been configured, running :ALEFix does nothing. I installed VS Code to check that it, along with the reason-language-server, does correctly format the code – which it does. So there appears to be some problem with ALE.

Time to try another plugin…

So I can either try the Rust-based LanguageClient, or the TypeScript-based COC.

I’ve heard about COC a couple of times and it’s designed to be a VS Code matching ‘LSPy thing’. So you should be able to configure Language Clients almost exactly as they do with VS Code but inside Vim.

But hey, I prefer Rust so let’s try that first…

Trying LanguageClient-neovim instead of ALE

Now that I’ve switched to vim-plug, installation of LanguageClient-neovim is easy because the manual install step is included in the .vimrc code.

Then also the configuration was easy because it’s included in the vim-reason-plus README:

let g:LanguageClient_serverCommands = {
    \ 'reason': ['/absolute/path/to/reason-language-server.exe'],
    \ }

Note again here that it mentions ‘absolute path’, so you have to use /home/ian instead of ~, although I still just use expand('~/path').

Now once I had installed that and restarted NVim then it all worked pretty smoothly.

Firstly, the errors show up in the quickfix list (:copen) as well as to the right of the code. It’s not quite formatted as nicely with the line breaks as VS Code, but it is at least a consistent block of text, not spaced out with extra padding where the line breaks should be. I suspect though that using the location list is better as it won’t overwrite any searches that I’ve done.

Interestingly, the ‘quickfix list’ is actually supposed to show errors according to :help :copen. So perhaps my searches should be in the location list. This can be changed via:

    let g:LanguageClient_diagnosticsList = 'Location'

The commands for the LanguageClient are more confusing though.

:ALEHover vs call LanguageClient#textDocument_hover()<cr>
:ALEFix vs call LanguageClient#textDocument_formatting()<cr>

You can fix that through vimrc mappings, and happily the formatting did work, which is what I want. Shit works without too much hassle.
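Something like this sketch does the trick, mirroring the ALE commands (the mappings themselves are my own choice):

  " map the LanguageClient calls to something closer to :ALEHover / :ALEFix
  nnoremap <silent> K :call LanguageClient#textDocument_hover()<CR>
  nnoremap <silent> <leader>f :call LanguageClient#textDocument_formatting()<CR>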

Try getting it to work using COC

One nice property of COC is that it doesn’t use any advanced features of vim-plug, so this will probably all still work with Vundle. Also it looks like it combines the Auto-completion and LSP in one plugin.

  1. Install nodejs – you can follow the instructions in https://github.com/neoclide/coc.nvim/wiki/Install-coc.nvim.
  2. Remove Deoplete and LanguageClient-neovim and any settings from your .vimrc file. Then add COC:

Plug 'neoclide/coc.nvim', {'branch': 'release'}

Then reload the vimrc with :source % and run :PlugInstall. Warning: you’ll get errors if you haven’t cleared all the deoplete settings. You might still need to restart Vim.

Install the reason extension

COC works similarly to VS Code in that it requires you install extensions to get it to work with certain language servers.

Install the reason extension, which is assuming that you are using the reason-language-server rather than the OCaml or Merlin LSP:

:CocInstall coc-reason

I ran this and it froze my Neovim. Closing and re-opening the terminal got it working again, but not good.

Now you’ll need to configure it, as you always need to specify the absolute path to the reason-language-server. Handily there is a configuration section for Reason.

Run :CocConfig and, if it’s empty (because this is the first time using it), first insert an empty root object:

{
}

Then put the following config inside the root object:

  "languageserver": {
     "reason": {
      "command": "/absolute/path/to/reason-language-server",
      "filetypes": ["reason"]
     }
  }
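Putting the two snippets together, the coc-settings.json file that :CocConfig opens ends up looking something like this:

  {
    "languageserver": {
      "reason": {
        "command": "/absolute/path/to/reason-language-server",
        "filetypes": ["reason"]
      }
    }
  }
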
Troubleshooting

Don’t forget to put the correct absolute path for the command. Here you can’t use the Vimscript expand(), so just use /home/[username].

e.g.: "/home/ian/rls-linux/reason-language-server"

Otherwise you’ll get:

[coc.nvim] Server languageserver.reason failed to start: Command “reason-language-server” of languageserver.reason is not executable
: Error: not found: reason-language-server

As soon as you set the correct path you should immediately start getting auto-completion and LSP goodies in your reason files.

Another possible error you can get is:

[coc.nvim] Server languageserver.reason failed to start: Cannot read property ‘reader’

This is a related error that also means you have the command path wrong. In my case I’d just written "/home/rls-linux/reason-language-server" (missing the username).

Further, I tested it on a basic Reason file created in an empty directory, which gives this error:

[coc.nvim] No root directory found

This appears to be a reason-language-server issue #334. You need to initialise the directory as per the installation page.

The auto-completion seems to work very nicely, but I noticed that the error messages are severely truncated, for example:

src/Index.re|7 col 34 error| [undefined] Error: This expression has type [E]
src/Index.re|7 col 59 error| [undefined] Error: The function applied to this argument has type [E]

Actually you can get the full error using :call CocAction('diagnosticInfo') or :call CocAction('diagnosticPreview').

This gives a better error than I got with ALE or LanguageClient-neovim:

[reason] [E] Error: This expression has type
(~message: string) =>
ReasonReact.componentSpec(ReasonReact.stateless,
ReasonReact.stateless,
ReasonReact.noRetainedProps,
ReasonReact.noRetainedProps,
ReasonReact.actionless)
but an expression was expected of type
ReasonReact.component('a, 'b, 'c) =
ReasonReact.componentSpec('a, 'a, 'b, 'b, 'c)

To get code formatting working you need to run :call CocAction('format'). So it’s similar to LanguageClient-neovim in that you’ll probably want to create a whole bunch of vimrc shortcuts. But at least it does format, which is a step up from ALE.

Hover info is through :call CocAction('doHover').
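As with LanguageClient-neovim, you’ll probably want to wrap these in mappings; a sketch (the keys are my own choice):

  " shortcuts for the CocAction calls used above
  nnoremap <silent> K :call CocAction('doHover')<CR>
  nnoremap <silent> <leader>f :call CocAction('format')<CR>
  nnoremap <silent> <leader>d :call CocAction('diagnosticInfo')<CR>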

One nice thing about this plugin is that it combines the completion and LSP plugins, and so the Plug config is:

Plug 'neoclide/coc.nvim', {'branch': 'release'}

Instead of:

Plug 'autozimu/LanguageClient-neovim', {
    \ 'branch': 'next',
    \ 'do': 'bash install.sh',
    \ }

" for neovim
if has('nvim')
  Plug 'Shougo/deoplete.nvim', { 'do': ':UpdateRemotePlugins' }
" for vim 8 with python
else
  Plug 'Shougo/deoplete.nvim'
  Plug 'roxma/nvim-yarp'
  Plug 'roxma/vim-hug-neovim-rpc'
  " the path to python3 is obtained through executing `:echo exepath('python3')` in vim
  let g:python3_host_prog = "/absolute/path/to/python3"
endif

So your choices are…

Note these are recommendations; you can probably get Vim/Vundle to work with these combinations, but I’ve simplified my life to match the instructions that the maintainers give out.

  1. Neovim, vim-plug, ALE, deoplete: format doesn’t appear to work, error messages are very ugly, deoplete has some nice integrations with fzf
  2. Neovim, vim-plug, LanguageClient-neovim, deoplete: has the most nice touches and seems to work the best with little config, format works, but error messages are still pushed into one long line that can be too long for the location list
  3. Vim/Neovim, vim-plug, COC: simplest plugin setup, but took a while to understand how to configure and doesn’t work so nicely out of the box; format works; appears to give the best formatted error messages – when you look for them. It’s easier if they just appear in the location/quickfix list, but I guess that’s the problem: location/quickfix don’t allow for multiline error messages. This also seems the most lightweight of the plugins, as it doesn’t bundle all possible language server configurations – you install them as extensions.

Further work with COC

I’ve since had further experience with COC, and found things that are actually fairly magical.

  1. If you’re a Javascript developer then working with eslint is very common. The coc-eslint plugin magically works with eslint straight off. I had all sorts of problems with eslint and ALE, which required eslint_d and neomake for reasons that I can’t quite remember.
  2. By default the errors in COC don’t show until you switch from insert mode to normal mode. This is actually a better experience in my opinion as it reduces the amount of constant information that you’re getting. There’s no need to display an error just because you haven’t typed something yet.
  3. So it means that I can replace ALE, Neomake, Deoplete and LanguageClient-neovim with COC. COC requires node to be installed, but beyond that there’s no specific vimrc config, so it should allow using Vundle – although it does require a specific branch, which I don’t think Vundle can handle. Also, without Deoplete there’s no difference between using Vim or Neovim, which is great.

A dive into the many faces of manifolds

What are manifolds? I started down this rabbit hole because of Chris Olah’s excellent Neural Networks / Functional programming blog post:

At their crudest, types in computer science are a way of embedding some kind of data in n bits. Similarly, representations in deep learning are a way to embed a data manifold in n dimensions.

So the problem is that people talk about them even when you don’t think you need to care about topology, or even really know what topology means.

The following is a bit of a dive into examples of manifolds with the intention of better understanding what a manifold is.

Starting somewhere

Opening quote from Wikipedia:

In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point.

Without knowing exactly what topological and Euclidean spaces are, that’s hard to understand. Let’s try Simple Wikipedia:

A manifold is a concept from mathematics. Making a manifold is like making a flat map of a sphere (the Earth).

OK, kinda. Except this is nicely confused by reading further into the Wikipedia page, which has:

A ball (sphere plus interior) is a 3-manifold with boundary. Its boundary is a sphere, a 2-manifold.

So, to me, it makes no sense to create a manifold (a flat map) from a manifold (a sphere is a 2-manifold). Why create a manifold if you’ve already got one? (My thinking here is wrong, but I’m trying to explain all my incorrect thinking as I go and clear it up at the end.)

Bounds checking

Some kind of useful quotes from the section on boundaries:

A piece of paper is a 2-manifold with a 1-manifold boundary (a line)

A ball is a 3-manifold with a 2-manifold boundary (a sphere)

Show me the money

‘Simple’ examples are usually my way out of confusion. The list of manifolds helps a lot:

  1. \mathbb{R}^n are all manifolds, so a line \mathbb{R} is a 1-manifold
  2. A x,y 2D plane \mathbb{R}^2 is a 2-manifold
  3. All spheres \mathbb{S}^n are manifolds… stop there, I’m confused

Spheres

This really gets me. Why do we need to create maps of the world if it’s already a manifold? The Simple Wikipedia explanation makes sense, i.e. it’s obvious that we want to create a flat map of the world. It also makes sense that whereas the angles of a triangle on the globe don’t add up to 180 degrees (bad), they do add up nicely to 180 degrees on a flat map (good). But why, if the earth is already a manifold, do we want to create another manifold from that? Isn’t being a manifold good enough? Some people are never satisfied.

n-manifold

A sphere is a 2-manifold, so although it is a 3D object, close to any one point it looks like a 2D grid.

The important part is the n-manifold. This is what cleared it up for me: you don’t care about creating one manifold from another.

An n-manifold resembles n-dimensional space near each point. So a flat map is a 2-manifold and the earth is a 2-manifold, because they both represent a 2D grid near any one point.
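For the record, the formal version of ‘resembles near each point’ is a chart: every point p of an n-manifold M has an open neighbourhood U that maps homeomorphically onto an open piece of \mathbb{R}^n (glossing over the topological small print). In LaTeX:

  % a chart: U is an open neighbourhood of p in M, \varphi is a homeomorphism
  \varphi : U \subseteq M \longrightarrow \varphi(U) \subseteq \mathbb{R}^n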

Manifold Hypothesis

In another post from Chris Olah, he talks explicitly about manifolds. But he talks about them assuming that you know what they are:

The manifold hypothesis is that natural data forms lower-dimensional manifolds in its embedding space.

*scream*

Nash equilibria

Although John Nash is most famous for his Nash equilibrium, a lot of his most important work was to do with manifolds:

His famous work on the existence of smooth isometric embeddings of Riemannian manifolds into Euclidean space.

Differentiation

Interestingly, differentiating a curve at a point on that curve gives you the slope of the flat tangent line at that point. So there’s a connection between differentiation and manifolds.

There’s a Calculus on Manifolds book that looks interesting, taken from this talk on The simple essence of automatic differentiation. Then there is a link back to Chris Olah’s NN/FP post, based on Conal Elliott’s automatic differentiation paper.

Euclid forgive me

To be honest even Euclidean space gets me confused. That’s basically just \mathbb{R}^n, which I can understand better. But is there a specific reason for using ‘Euclidean’? Is there some extra property of Euclidean space that isn’t inherent in \mathbb{R}^n? There’s a fundamental principle of parallel lines not converging in Euclidean space, but then, if a manifold is in Euclidean space, how can a sphere be a manifold? It still doesn’t explain why we want to convert one manifold into another (as in the spheres section above).

Euclid’s grid

I keep thinking of Euclidean space as basically everything. But it’s not – it’s an n-dimensional grid (with infinitely many points, as it’s the real number line in each direction) and it has straight edges. So a ball isn’t in Euclidean space, or “isn’t Euclidean” – I’m not sure which.

In image analysis applications, one can consider images as functions on the Euclidean space (plane), sampled on a grid

Geometric deep learning paper

Data manifolds (back to Chris)

Perhaps data manifolds are the structure of the data. For example, as above, image data is a 2D Euclidean grid.

They get referred to in the Stanford CS231n CNNs course when talking about ReLUs:

(-) Unfortunately, ReLU units can be fragile during training and can “die”. For example, a large gradient flowing through a ReLU neuron could cause the weights to update in such a way that the neuron will never activate on any datapoint again. If this happens, then the gradient flowing through the unit will forever be zero from that point on. That is, the ReLU units can irreversibly die during training since they can get knocked off the data manifold. For example, you may find that as much as 40% of your network can be “dead” (i.e. neurons that never activate across the entire training dataset) if the learning rate is set too high. With a proper setting of the learning rate this is less frequently an issue.

It’s as if data manifold is the useful/accessible part of the data.

What’s not Euclidean

For instance, in social networks, the characteristics of users can be modeled as signals on the vertices of the social graph [22]. Sensor networks are graph models of distributed interconnected sensors, whose readings are modelled as time-dependent signals on the vertices.

In computer graphics and vision, 3D objects are modeled as Riemannian manifolds (surfaces) endowed with properties such as color texture.

Geometric deep learning paper

Manifolds don’t have to be Euclidean! But at any particular point on a manifold’s surface it is approximately a grid in the local area. This also holds when the manifold is itself Euclidean: a sheet of paper is locally Euclidean because the whole sheet is already Euclidean.

But certainly I like the explanation that 3D graphics are manifolds. So there’s nothing special that makes the earth a manifold; it’s just one example of one. I think this means that any 3D surface is a manifold.

So what’s not Euclidean and not a manifold?

What’s not a manifold?

That dear reader… is an exercise for you.

What did this get us?

  1. Euclidean space means a grid (duh)
  2. An image of pixels is a 2D grid – possibly the data manifold that Chris Olah was referring to
  3. A 3D graphic (or the earth) is a 2-manifold. These are not Euclidean but are 2D Euclidean (grid shaped) close to a point on their surface
  4. When your data is 3D graphics – that is your data manifold

Vim Airline Powerline fonts on Fedora, Ubuntu and Windows

N.B. I’ve also answered this on Vi Stack Exchange, but I’m posting it here as it took a lot of work.

This took hours to figure out, so here’s more of a dummies guide for Fedora/Ubuntu, with a special section for Windows.

The first step is figuring out what the hell those strange but nice angle brackets that appear in the vim-airline status bar actually are. The background is that airline is a pure Vim version of powerline (which was Python), and powerline uses UTF-8 characters to insert those angle brackets. So vim-airline just uses the same UTF-8 characters.

Then even if you do manage to get one installed they look uglier than you’d hope because the fonts don’t fully work.

Configuring Vim

This is the opposite order to the official instructions, but I had this bit wrong at the end, which made me question all the font installations. So I suggest you get this configured first; then, once you get the fonts working, it should magically appear.

The final trick was forcing vim-airline to use the fonts it needs. According to the official documentation it should just be a matter of adding let g:airline_powerline_fonts = 1 to your .vimrc. However I did this and had no luck. There’s more information in :help airline-customization, and that gives you some simple config settings that you need, just in case. This was the final magic sauce that I needed. I don’t know why this wasn’t automatically created. This is also mentioned in this Vi Stack Exchange answer.
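For reference, the official route boils down to roughly this minimal sketch (assuming vim-plug as earlier in this post):

  Plug 'vim-airline/vim-airline'
  " the official switch that is supposed to pull in the powerline glyphs on its own
  let g:airline_powerline_fonts = 1

When that alone doesn’t produce the symbols, the explicit overrides below are what finally worked for me: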

    if !exists('g:airline_symbols')
        let g:airline_symbols = {}
    endif

    " unicode symbols
    let g:airline_left_sep = '»'
    let g:airline_left_sep = '▶'
    let g:airline_right_sep = '«'
    let g:airline_right_sep = '◀'
    let g:airline_symbols.crypt = '🔒'
    let g:airline_symbols.linenr = '☰'
    let g:airline_symbols.linenr = '␊'
    let g:airline_symbols.linenr = '␤'
    let g:airline_symbols.linenr = '¶'
    let g:airline_symbols.maxlinenr = ''
    let g:airline_symbols.maxlinenr = '㏑'
    let g:airline_symbols.branch = '⎇'
    let g:airline_symbols.paste = 'ρ'
    let g:airline_symbols.paste = 'Þ'
    let g:airline_symbols.paste = '∥'
    let g:airline_symbols.spell = 'Ꞩ'
    let g:airline_symbols.notexists = 'Ɇ'
    let g:airline_symbols.whitespace = 'Ξ'

    " powerline symbols
    let g:airline_left_sep = ''
    let g:airline_left_alt_sep = ''
    let g:airline_right_sep = ''
    let g:airline_right_alt_sep = ''
    let g:airline_symbols.branch = ''
    let g:airline_symbols.readonly = ''
    let g:airline_symbols.linenr = '☰'
    let g:airline_symbols.maxlinenr = ''

Kitchen sinking it on Fedora and Ubuntu

This is probably an overkill solution, but first you need to get it consistently working before you can simplify it.

  1. Install the general powerline font with sudo dnf install powerline-fonts (or sudo apt install fonts-powerline) – this should mean that you can use any font you already have installed. If you don’t have an easy way of installing it like dnf/apt then there are instructions for doing it manually, e.g. https://www.tecmint.com/powerline-adds-powerful-statuslines-and-prompts-to-vim-and-bash/, and the official documentation also has instructions (https://powerline.readthedocs.io/en/latest/installation/linux.html#fonts-installation).

    Now close your terminal, re-open it, and check that the Powerline symbols font is available if you edit the terminal preferences and set a custom font. You don’t want to use the font directly, just check that it’s available. Now try opening Vim and see if you have nice symbols.

  2. If the general powerline font didn’t work, or if you’re trying to improve things, you can try installing individual ‘patched’ fonts. This took a while to figure out, but you can literally just go to the folder you want in https://github.com/powerline/fonts/ and download it; the font that I’ve liked the most from my tests is the Source Code Pro patched font. Then just open the downloaded font file and click ‘Install’.

    If you’d rather the command line, you can install all patched fonts:

    $ git clone https://github.com/powerline/fonts.git --depth=1
    $ fonts/install.sh
    $ rm -rf fonts
    

    This will install all the patched mono fonts, but then this gives you a chance to explore the possible fonts. The font list it installs is a pretty awesome list of the available source code fonts. It also means you don’t have to faff around installing each of the individual fonts that get included.

  3. Check that the font can be specified in the terminal preferences (re-open your terminal session if you’re missing fonts). Note there are two options here:
    1. The general powerline font is working, in which case you can just use the base font, e.g. DejaVu Sans Mono.
    2. If you can’t get that working, the patched font that you downloaded above should be correct, e.g. the equivalent for DejaVu is ‘DejaVu Sans Mono for Powerline’.

Handling the delicate flower of Windows

The general Powerline fonts package doesn’t work with Windows, so your only choice is to use a patched font. The bash script to install all the fonts doesn’t work either. This means that on Windows you have to manually go into each of the font directories, download all the fonts yourself and install them by opening each one in turn.

I downloaded all of the Source Code Pro patched fonts and installed them. Even though you install them as individual fonts they get added to Windows as a single font ‘Source Code Pro for Powerline’ with a separate attribute to specify the weight.

Then add this to your .vimrc:

set guifont=Source\ Code\ Pro\ for\ Powerline:h15:cANSI

If you want to use the ‘Light’ font, use this:

set guifont=Source_Code_Pro_Light:h15:cANSI

It doesn’t make much sense, as it doesn’t need to include the ‘for Powerline’, but that’s how it works (I figured it out by setting the font in GVim and then using set guifont? to check what GVim used). Also I spotted that when you use GVim to switch the font, the font rendering isn’t very good. I initially discounted the Light font because when I switched using the GVim menu it rendered badly, but if you put the setting above into your .vimrc and restart GVim it should look lovely.
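One extra hedge worth adding: guard the guifont line so it only applies in the GUI, since terminal Vim takes its font from the terminal emulator. A sketch:

  " 'guifont' only matters in GVim; terminal Vim uses the terminal's own font
  if has('gui_running')
    set guifont=Source\ Code\ Pro\ for\ Powerline:h15:cANSI
  endif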

Also the nice thing is that you can set your DOS/Powershell prompt to the same font.

Tweaking

Once I actually got it working for the first time, it was really disappointing as the icons didn’t fully match up. But as per the FAQ we need to do some tweaking. I started off with Inconsolata as this gives me a consistent font across Windows and Linux. You can install the general font easily on Ubuntu with apt install fonts-inconsolata. This is what I got:

[screenshot of the vim-airline bar]

The arrows are too large and are shifted up in an ugly manner.

Then I tried all the other default Ubuntu fonts.

Ubuntu mono:

[screenshot of the vim-airline bar]

DejaVu Sans Mono:

[screenshot of the vim-airline bar]

This has the vertical position correct but the right hand side arrows have a space after them.

Why you use the patched fonts

Using the default fonts relies on the Powerline font to automatically patch existing fonts. However you can improve the look of the airline symbols by using the patched fonts. These are the equivalents using the patched fonts.

I display these all at font size 16 as I like to use a larger font, plus it shows up minor issues.

Inconsolata for Powerline:

[screenshot of the vim-airline bar]

This still has issues, but they are almost all solved by the dz variation.

Inconsolata-dz for Powerline dz:

[screenshot of the vim-airline bar]

This has a hairline fracture on the right hand side arrows, but is otherwise perfect.

Ubuntu Mono derivative Powerline Regular:

[screenshot of the vim-airline bar]

This still has annoying issues.

DejaVu Sans Mono for Powerline Book:

[screenshot of the vim-airline bar]

This has a hairline fracture on the right hand side arrows, but is otherwise perfect. I actually prefer it to the Inconsolata-dz as the LN icon is more readable.

On top of these regulars, I tried almost all the available fonts and my other favourite was Source Code Pro.

Source Code Pro for Powerline Medium

[screenshot of the vim-airline bar]

This does have issues at size 16 where the arrows are too big, but at size 14 it’s almost unnoticeable. The branch and LN icons do overflow to the bottom, but somehow this doesn’t annoy me.

Source Code Pro for Powerline Light

[screenshot of the vim-airline bar]

This almost completely solves the issues of the medium font’s arrow sizes and makes it about perfect, although there’s still the icon overflow.

Source Code Pro

When I was investigating the options for fonts there are a couple of things you notice: some font patches have the absolute minimum of detail, and if you compare this to the Source Code Pro list the difference is quite significant. Source Code Pro is a very detailed and complete font that has been designed to work in a large range of scenarios. This kind of completeness matters for edge cases.

Used as a patched font it almost perfectly displays the vim-airline bar. The benefit of so many variants is that you can use the Light weight, which displays the vim-airline bar even better.

Source Code Pro is also under continued open development in Adobe’s GitHub repository.

Self-driving cars should first replace amateur instead of professional drivers

Photo credit: cheers Prasad Kholkute

Professional drivers, i.e. lorry, bus and taxi drivers, are under threat of being replaced by computers, whilst amateur drivers, all the rest of us, feel no pressure to stop.

I want to lay out the reasons why this is backwards. Amateur drivers should be replaced, whilst professional drivers should have their skills augmented.

Once we reach a level of self driving automation where no humans are needed then this piece is no longer relevant, but there is a lot of scope to avoid human deaths before we reach that point.

The trolley problem is not the issue; the drunken humans who think it’s fun to go racing the trolleys are the problem.

This gets long, so feel free to skip ahead to a section. It is based on a European-centric viewpoint, most specifically UK and Belgian roads.

  1. Professional vs amateur
  2. Level 3 automation
  3. Safety
  4. Drinking, texting, calling, dozing, tail gating
  5. Crash vs accident
  6. Bus crashes
  7. Solve all bus crashes
  8. Business cost from crashes on the motorway
  9. Lost time to driving
  10. The cost of the dead
  11. Advanced driving licence
  12. Driving freedom
  13. Difficulties
  14. Simulations
  15. Conclusion

Professional vs amateur

Lorry, taxi and bus drivers are professional drivers. They do it for their living. People pay them to drive.

Lorry drivers are paid because they can reliably and safely deliver large amounts of goods over long distances. Billions of euros, every year.

Bus drivers are responsible for transporting 10–50 people on each journey. When you consider the value of the cargo, either in human terms or in absolute money ($5m per person), the responsibility is huge.

Taxi drivers are effectively looking after multi-million dollar cargo. They also have vast local knowledge. A special case of this is the ‘knowledge’ that London taxi drivers must pass. Much of this local knowledge is rendered less important by GPS systems, but still the GPS augments the driver’s knowledge.

These professional drivers drive all day and in all conditions. They have well-maintained vehicles, and make full use of them rather than leaving them sitting in garages doing nothing.

Professional drivers are the gold standard of driving styles.

Everyone who is not paid to drive full time is simply an amateur. They typically drive for the economic benefit (time saving) or the convenience/freedom of having a car.

They typically have people in the car who are personally more valuable than taxi passengers as they are likely family or close friends. But the contrast is that they do not think in these terms. They put a sticker ‘baby on board’ in the back window and assume that will solve the problem.

Level 3 automation

The current level of self-driving for cars such as those from Tesla and GM is level 3. This is where the car can drive itself but requires constant supervision. This flies in the face of two facts:

  1. Humans are good at vision but bad at concentration
  2. Computers are bad at vision but good at concentration

With passive level 2 systems in a car, the human uses their vision and the car concentrates for extreme circumstances. Further it helps the human concentration because it keeps them constantly engaged.

With active level 3 systems, the computer is relied on for its bad vision and the human is no longer constantly engaged but expected to keep their bad concentration.

Further, there is a major issue of doubt and delayed reactions. With level 2 systems, if the car detects it is about to crash, it does not doubt the outcome and reacts faster than a human could. With level 3 systems, if the human detects that the car is about to crash, they are in doubt as to whether the computer will avoid it, and so they delay before doing anything.

Safety

Buses in Belgium are 80% safer than cars (0.4 deaths vs 2 deaths per million passenger miles in 2015). But the figure for cars looks better than it is because it includes taxis. Taxis do many more miles than amateur drivers and are less likely to crash.

In fact the number of bus crashes is so low that we can almost look at the individual cases. Also note that a single bus crash can kill up to 30 people, so the number of deadly crashes is even lower.

Lorry, bus and taxi drivers drive more safely. They avoid the typical causes of crashes:

  • Alcohol
  • Tiredness
  • Texting
  • Calling
  • Speeding
  • Going through red lights
  • Driving erratically
  • Having poor eye sight
  • Driving too close
  • Not changing driving style to bad weather conditions
  • Calmness in an accident
  • Driving without insurance
  • Driving without a licence

Tiredness can be an issue for lorry drivers, but there are strict rules. Automated cars also have these attributes, but they don’t have human-level performance to handle all situations.

This level of passenger safety is the potential route to ending traffic deaths. Once you can look at individual crashes, as is done with air crashes, the cause of each crash can be properly understood and recorded. When the numbers are as high as they are now, aggregate statistics have to be used, which will never get the number of deaths down to zero.

Professional drivers can be augmented by automated driving. Each crash can be analysed and added to the training for the automated car. This means that automated cars will have more knowledge of extreme cases, even if less of everyday driving. On top of that, automated cars can react more quickly and without emotion to handle a car that is out of control. Automated systems can be taught to drive at the very limit of the frictional ability of the tyres in the given weather conditions.

Drinking, texting, calling, dozing, tail gating

These are the fundamental problems of amateur drivers. Whilst they are warned about the consequences, the chances of having a crash are always very low, so there is always the temptation to drive when incapacitated in some way. There is no standard test that can be enforced on drivers before starting. There are some efforts to put breathalysers in cars, but the chances are so low that these changes will not get us to zero deaths. The only stories I have heard about this involve repeat drink-drive offenders and taxi drivers. But this still only stops one failure, drinking. All the other failures of human drivers have no acceptable solutions.

Crash vs accident

When trying to weigh the cost of human life against the cost of professional drivers, the language needs to change to reflect the fact that practically all crashes are avoidable; they are not accidents.

As this CityLab article argues when talking about road deaths, framing them in terms of crashes rather than accidents gives focus to the causes. Each crash should be treated like an air crash. There are no air accidents, and to get down to zero road deaths, there can be no accidents. An accident isn’t necessarily avoidable, but a crash is.

Bus crashes

Bus crashes in Belgium are down to single-figure deaths per year. When the figures are this low we can consider each case individually. These are some of the causes:

  • The driver becomes unwell, for example with a heart attack
  • A tyre blows out on a motorway crossover and the bus drives off the side

Both these cases would be better handled with automated assistance.

If the driver takes their hands off the wheel the computer can take over and bring the bus to a stop at the side of the road.

In an extreme event such as a tyre blow-out or hitting a large patch of ice, the computer can be better trained. Simulations can be replayed millions of times and the raw physics of these situations can be well analysed.

Solve all bus crashes

Each of the situations that caused a fatal bus crash can be analysed and simulated. Then, with further simulations, the environment can be altered to train the vehicle on other similar situations.

Aircraft pilots train in a similar way. These are the best possible circumstances for training drivers. There is such a long history of bus crashes, and the numbers are already so low, that there is a realistic chance of preventing all bus deaths across Europe.

But all of this is humans being augmented by computers. The computers are treated as the safety belt, there to catch exceptional circumstances. All the while they can be learning from the bus drivers. Especially if the bus drivers have taken an advanced driving test, the computers are then learning the most consistent and safe method of driving.

Business cost from crashes on the motorway

One of the potential but more extreme solutions is to ban amateur drivers from the motorway. It could be weakened to allow amateur drivers with an advanced licence onto the motorway.

One simple argument for this is the business cost of the delays that crashes cause. One crash delays thousands of people and lorries. The ring around Antwerp is a major example.

Reducing road deaths on the motorway to zero is within reach. Reducing the number of bad drivers on the motorway has a multiplying effect: it takes two bad drivers to crash, one who makes the mistake and another who is following too close.

There are some major issues with this solution:

  • It will force bad drivers off the motorway and onto the normal roads. This will increase the death rate off the motorway
  • Policing this will be a problem. A simple possibility is to have a letter in the car windscreen for all those with an advanced licence. Effectively a reverse learner sticker

Lost time to driving

Professional drivers lose no time when driving a car, bus or lorry; the driving is their work. They only lose the time driving to their work, and effectively during this period they are amateurs too.

But all amateur drivers are losing time. Perhaps it is more pleasurable than being stuck on a train. But it’s wasted time.

The first area where level 4 (fully autonomous) vehicles will become a reality will be motorways. It would be possible to allow self-driving whilst on a motorway but switch it off once the GPS detects that the car has left the motorway. This would cut down on motorway deaths, save businesses money from fewer delays and give more free time to commuters.

The cost of the dead

Deaths in car crashes are an especially tragic kind of death. The victims almost always have nothing wrong with them; they are mentally and physically healthy. They are also often young, and it is especially cruel when they are young children, for example pedestrians.

The economic cost is put at $5m per person, based on the amount of output that an average person can produce. But the waste is bigger when the person is killed younger: all the training and education has been given, but they never get a chance to pay it back into society.

But that does not take into account the destroyed lives of the families of the victims: parents who lose their children, siblings who lose their brothers and sisters, children who lose their parents. Further, there is the economic cost of the victims’ families who struggle to work for years after the death.

Why do people not feel or see this pain? We stick horror photos of smokers on cigarette packets, but nothing of crash victims on cars.

The biggest insult is that of drink drivers. The thought of having your child killed by a drunk driver. Death by human stupidity. Not someone evil, just someone stupid.

No other amateurs can do such damage. Professionals practically never do this. There will be cases, but the cases are so few that it is at a level at which no more can be done.

Advanced driving licence

One potential improvement is to increase the requirements of the driving tests. In the UK there is the advanced driving test, which tests candidates in many extreme situations as well as increased road safety and traffic awareness. The insurance premium for advanced drivers is lower.

A further possibility would be to require a new driving test every 10 years, and to require it to be at the advanced level.

These tests should also be mandatory for professional drivers, but the hope is that it will be easier for them to pass, and they will get the added benefit of lower insurance premiums when driving for themselves.

This will smooth the transition to automated cars as humans will behave at a level that is closer to professional drivers.

Driving freedom

The major issue with restricting amateur drivers is the freedom and reliance we have on cars.

If we were to restrict who can go on motorways it would restrict poorer people unfairly. They cannot afford to buy expensive automated cars.

Is there research on the link between social status and driving deaths?

But we are not completely restricting driving, just motorway driving for those without an advanced licence. So the freedom is still there, just a bit slower.

Plus retaking your licence every 10 years.

But it is restricting a freedom that exists now. People are happy to accept the deaths for the freedom.

But commuters don’t need the freedom. Take it away from the rich with their cheaply taxed company cars. Commuters could take taxis and buses. Introduce toll roads; the French péage system is perfect. Hike the prices during commuting times but make exceptions for professional drivers.

Difficulties

There was a bus lane in London that received a lot of complaints, and it was eventually scrapped because it was too much of a political issue. In Belgium, bus lanes still exist in the slow lane. But it could be a professional lane, which would include lorries as well. This would mean the lane is fully used.

It would be very unpopular though.

The ultimate step is to prevent amateur drivers from using the motorway. But you can’t prevent foreign drivers, though you could restrict them to lorry speeds unless they have a pass. It’s the car-pool lane idea, but now as a professional or advanced drivers’ lane.

Bring in an advanced test and also a driving test every 10 years. Then you can force up the level of driving. But then the whole population must take a driving test, and the system can’t cope now: there aren’t enough driving instructors.

Simulations

The automated part of the test could be expanded, along with eyesight and hazard tests.

Also a focus on time gaps: keeping the 3-second rule and increasing it in rain and ice.

We have a limited number of driving instructors, but driving simulations can be drastically improved. There are very realistic driving simulation games with highly accurate physics. These games are played by children, but with no emphasis on them being a useful tool.

This is how pilots are trained: they do thousands of hours in a simulator, repeatedly learning disaster scenarios.

All drivers could be put through multiple simulations of crash scenarios, or a session on a skid pan as would happen with a regular advanced driving test. They could drive around virtual cones until they master it. Mastery of the situation should be the key.

Currently it is very costly to take a driving theory test. The tests could be made harder but given a fixed price that allows as many retakes as required. This is similar to the concept put forward for learning with Khan Academy: mastery is what matters, getting 100% on the test whilst allowing as many retakes as needed.

A further benefit is that the data collected by the driving simulations can be used to train the AIs: to see how a human handles a crash, and to gain insight from the multiple attempts at the same crash into which methods can be used to avoid it.

Conclusion

The focus should be on improving and replacing amateur drivers whilst augmenting professional drivers. This is a highly unrealistic hope, as the focus for self-driving car companies now is simply to cut the cost of human drivers. But the human cost of deaths should be regarded as the higher priority, with a requirement for public policy to intervene.