Why functional programming?

This is a post discussing the benefits of functional programming (FP) for those who don’t yet use it. I guess it’s most relevant for full stack developers – those who love their JavaScript and SQL children equally.

tl;dr: FP combines well with SQL, is easy to learn (like Excel), is declarative, makes you think of data as a whole, is backed by mathematics and (IMHO) most importantly leads to fewer bugs (which has a compound-interest style benefit).

I was reading a functional programming blog post. It’s excellent and taught me things. As someone already familiar with functional programming I love it, and in no way do I want to slander the post. I still feel like a very basic functional programmer, and that post explains beautifully, without using libraries, how I can take my functional programming a step further. However I would argue it struggles to convince someone why they should start to use functional programming.

Here are three sections talking about the hard-to-see benefits of functional programming (emphasis mine):

We could then put those together with pipe() like this:

const comments = pipe(commentStrs,
    filter(noNazi),
    take(10),
    map(emphasize),
    map(itemize),
    join('\n'),
);

If we squint a little, our pipeline isn’t so different from chaining array methods:

const comments = commentStrs
   .filter(noNazi)
   .slice(0, 10)
   .map(emphasize)
   .map(itemize)
   .join('\n');

Now, someone may feel that the array method chaining looks a little cleaner. They may be right. And someone else may even be wondering why we’d waste time with pipe() and those utility functions.

later on:

This [non functional block of code] code is fine. For lots of people, it’s going to be familiar and readable. It accomplishes the same result as the composed version. Why would anyone bother with pipe()?

and in the conclusion:

Now, you may not care one whit about functional programming. That’s fine. But using pipe() opens up a whole new way to structure programs. With statements, we write code as a series of instructions to the computer. It’s a lot like a recipe in a cookbook. Do this; then do that; then do this other thing. But with composition, we express code as relationships between functions.

I think these arguments for functional programming can be simplified and strengthened.

Declare me a cookbook (it’s cookbooks all the way down)

Functional programming, as I understand it, is often described as being like a cookbook. It’s declarative – mix this, bake that. You take the output of one part of the recipe and apply it to the next part. You don’t work on the egg protein by protein, you whisk the whole thing. Maybe you separate the white and the yolk, but you’re always dealing with things as a whole.

Once you start selling functional programming to people as ‘express code as relationships between functions’, I think you have lost. Either they understand what you mean, which means they already are sold, or they don’t understand in which case you will never sell it to them.

Composition encourages us to think about code as relationships between expressions … As a result, our code becomes more declarative.

So ‘relationships between expressions’ gives us more declarative code.

I always relate declarative code to SQL. In SQL, the CURSOR is the only available loop and it is seen as a code smell. WHERE (filter in FP language), aggregation functions like SUM (reduce in FP language) and any transformational functions like CONCAT in SELECT (map in FP language) are exactly what coders understand and use naturally.

How I think about functional programming is dealing with your whole group of data as one block.

Not row per row, but tables/sets at a time.
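
As a rough illustration (the data and column names here are invented, not from any real schema), the SQL way of thinking translates almost word for word into array methods:

// Invented data standing in for an orders table
const orders = [
    { customer: 'alice', amount: 40, cancelled: false },
    { customer: 'bob', amount: 25, cancelled: true },
    { customer: 'alice', amount: 10, cancelled: false },
];

// SQL: SELECT SUM(amount) FROM orders WHERE cancelled = FALSE
const total = orders
    .filter(order => !order.cancelled)          // WHERE
    .map(order => order.amount)                 // the SELECT expression
    .reduce((sum, amount) => sum + amount, 0);  // SUM

console.log(total); // 50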

Excel

I also come back to Excel. Excel has been used by millions of non-coders to code. It is one of the most powerful and most used computing platforms.

If non-programmers can understand how powerful Excel is, then programmers should understand how powerful that same style of programming is once it is freed from Excel’s borders.

I also keep coming back to the power of just writing a simple function:

function double (x) { return 2 * x; }

As opposed to wrapping that function up in a class, or just writing 2 * myVar everywhere. It is a nice balance between DRY and only getting the banana (not the monkey and the rain forest).
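
A quick sketch of the contrast I mean (the Doubler class is a strawman of my own, purely for illustration):

function double (x) { return 2 * x; }

// The simple function reads naturally with map
console.log([1, 2, 3].map(double)); // [2, 4, 6]

// versus dragging a class along just to get at the same logic
class Doubler {
    apply (x) { return 2 * x; }
}
console.log([1, 2, 3].map(x => new Doubler().apply(x))); // [2, 4, 6]

// versus repeating the expression inline everywhere
console.log([1, 2, 3].map(x => 2 * x)); // [2, 4, 6]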

1000 years of proofs

The other reference point I keep coming back to is that functional programming is backed by 1000 years of mathematics. Mapping one set of data onto another is a beautiful mathematical concept. Computer science has 70 years of history; mathematics has thousands of years.

Functional programming came about through mathematicians and computer scientists discovering the same things at the same time from different angles.

Most people will never have heard of the Church-Turing thesis. In your typical OOP course – frankly, in my entire Mathematics and Computer Science degree – I don’t remember talking about the Church-Turing thesis, even though we did have a course on Haskell. It was only years later, mostly via links I came across on Hacker News, that I learned more about Haskell, Category Theory and the Church-Turing thesis.

What is the Church-Turing thesis? Here’s a summary (cheers ChatGPT):

It states that any function that can be computed by an algorithm can be computed by a Turing machine. In simpler terms, it suggests that any problem that can be solved by a step-by-step procedure can also be solved by a computer program.

Written in another way – if you can write an algorithm in your programming language, then the computer can solve it. Coding is only so popular because we know that what we write can actually be computed. Otherwise it would just be a research topic.

This applies to regular procedural programming as much as it does to functional programming. But if we go to Wikipedia we get:

In 1933, Kurt Gödel, with Jacques Herbrand, formalized the definition of the class of general recursive functions: the smallest class of functions (with arbitrarily many arguments) that is closed under composition, recursion, and minimization, and includes zero, successor, and all projections.

This is one of three definitions in the Church-Turing thesis that were proved to be equivalent. The other two definitions were by Alonzo Church (λ-calculus) and Alan Turing (Turing machines). But interestingly this definition is much more relevant to functional programming.

When you are composing functions (passing the result of one function to another), or when you are using recursion, you are standing on the shoulders of Kurt Gödel, whilst Church and Turing look on approvingly. You have a clear, transparent path to proven methods.

I feel their presence in the room when I’m coding. If you’re doing OOP, maybe you think you have the approving gaze of Alan Kay (the originator of the OOP concept). But if you’re using Java, C# or C++ (and definitely if you’re using classes in JavaScript), he’s really not a fan:

the C based languages that have been painted with “OOP paint”.

Alan Kay comment (2010)

Start the FP journey

But also, even at the most basic level, replacing loops with map, filter and reduce can be done easily (see the sketch below). This is how I taught myself functional programming using general purpose languages. You don’t need to use compose, pipe or flow as mentioned in the blog post. You don’t need monads.

What you need is:

  • pass functions as parameters
  • return functions
  • map
  • filter
  • reduce

That will get you enough for years of coding. You can learn things like flatMap as a next step.
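
For example (just a sketch with made-up numbers), here is the same calculation written first as a loop and then with those building blocks:

const prices = [5, 12, 8, 30];

// Loop version: total of prices over 10, with 21% VAT added
let totalOverTen = 0;
for (let i = 0; i < prices.length; i++) {
    if (prices[i] > 10) {
        totalOverTen += prices[i] * 1.21;
    }
}

// The same thing with filter, map and reduce
const totalOverTenFp = prices
    .filter(price => price > 10)              // keep only prices over 10
    .map(price => price * 1.21)               // add VAT
    .reduce((sum, price) => sum + price, 0);  // sum them up

console.log(totalOverTen, totalOverTenFp); // both print the same total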

I’m fine if people write lots and lots of nested functions. It may be ugly but it’s easy for anyone who isn’t familiar with functional programming to read.

compose, pipe and flow, like arrow functions, can all make your code more powerful. If you have a culture that understands functional programming deeply then they are great to use. They’re the next step up. What is great is that there are many more levels you can go up – category theory is always there waiting for you to learn about. It’s amazingly powerful, but you don’t need it.

Conclusion

So I like functional programming because:

  1. It’s declarative, and so my SQL and my regular code become more similar
  2. Coding declaratively is “good”
  3. My thinking about my code acts on data as a whole, so it becomes more similar to my SQL thinking in sets and tables
  4. Thinking about data as a whole is “good”
  5. 1) and 3) mean that I can apply the same thinking and coding all the way through my programming stack. A trick I learn in the database can be applied in the front end.
  6. It’s mathematical, and so the concepts I use have been proven to work. I can’t be sure we’ll still be using for and while in 1000 years, but map, filter and reduce will still be there

Finally, and for me most importantly, I believe I have fewer bugs in my code. Removing just the simple off-by-one errors has a compound-interest style impact on the rest of my code. I have fewer places to look for bugs, so I spot bugs faster. I spend less time fixing bugs and so less time having to refactor. Bugs get harder and take longer to fix the older they are. I spend more time coming up with new code, thinking about the data and solving the problem better on the first attempt.

I spend my days building and not repairing.

Composable

What does it mean to say that something is composable in JavaScript? These are some notes after watching the funfunfunction Promises video.

Callbacks are not composable, but Promises are.

My simple understanding of composing two functions is just calling one function inside another:

const f = x => x*x
const g = x => x*2

console.log(g(f(1)))
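
A generic two-function compose helper is just that idea wrapped up (my own sketch, not from the video):

const compose = (outer, inner) => x => outer(inner(x));

const f = x => x * x;
const g = x => x * 2;

const h = compose(g, f); // h(x) is g(f(x))
console.log(h(1)); // 2 – the same as g(f(1)) above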

The example compares callbacks:

loadImageCallbacked('images/cat1.jpg', (error, img1) => {
    if (error) throw error
    addImg(img1.src)
    loadImageCallbacked('images/cat2.jpg', (error, img2) => {
        if (error) throw error
        addImg(img2.src)
    })
})

vs Promises:

Promise.all([
    loadImage('images/cat1.jpg'),
    loadImage('images/cat2.jpg')
]).then(images => {
    images.map(img => addImg(img.src))
})

Or in individual terms, we can return a promise from each then call until we’re done:

loadImage('images/cat1.jpg').then(img => {
    addImg(img.src)
    return loadImage('images/cat2.jpg')
}).then(img => {
    addImg(img.src)
    // it's a good habit to always return something from a Promise
    return null
})

So a callback in the simplest terms is a function g that just calls the callback f that is passed in:

const f = x => x * x
const g = (x, f) => {
    f(x)
}
console.log(g(1, f))

Now compare this to composable functions: g(f()). With the callback version we are not passing the output of f to g, we are passing f itself. The console.log will output undefined because g doesn’t return anything. It could, but that’s not typically how we use callbacks.

So now at least it makes sense to me why callbacks aren’t composable.
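
To make the contrast concrete, here is a toy sketch of my own (not from the video): the callback-style g swallows the result, while a version that returns a value slots straight into composition:

const f = x => x * x;

// Callback style: g never returns anything, so there is nothing to pass on
const gCallback = (x, callback) => {
    callback(x);
};
console.log(gCallback(1, f)); // undefined – the result is swallowed

// Composable style: g returns a value, so its output can feed the next function
const gComposable = x => x * 2;
console.log(gComposable(f(3))); // 18 – f(3) is 9, doubled is 18

const h = x => gComposable(f(x)); // the composition is itself a new function
console.log(h(3)); // 18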

VSCode Neovim setup

I recently switched over to using the vscode-neovim extension for VSCode.

What wasn’t obvious though was how to get a plugin manager and plugins / vim customisations working.

I’ll quickly add my plugin setup here, as I’m using vim-surround and a bunch of other Tim Pope plugins.

I’m on Linux / Ubuntu 22.04, with nvim installed via apt from ppa:neovim-ppa/unstable – you need this to get the nvim v0.8+ that the extension currently requires.

I moved from Vim to Neovim and I still use my .vimrc and amazingly this kind of setup seems to work with vscode-neovim too.

As per :help nvim-from-vim, I have the following ~/.config/nvim/init.vim:

set runtimepath^=~/.vim runtimepath+=~/.vim/after
let &packpath = &runtimepath
if exists('g:vscode')
" VSCode extension
source ~/.vimrc.vscode
else
" ordinary Neovim
source ~/.vimrc
endif

This happily loads my ~/.vimrc file for terminal nvim, and loads ~/.vimrc.vscode for vscode-neovim – which does appear to work:

  1. It sets my leader to <space>
  2. It loads Plug
  3. It loads my custom key mappings
  4. It loads vim-surround and vim-repeat
    1. I know it loads these because I have a command map <leader>' ysiw'
    2. This puts single quotes around the current word
    3. This works (so vim-surround works)
    4. I can then ‘repeat’ my adding quotes with . which means vim-repeat is working in combination with it.

I have massively stripped my .vimrc file down and renamed it to ~/.vimrc.vscode:

let data_dir = has('nvim') ? stdpath('data') . '/site' : '~/.vim'
if empty(glob(data_dir . '/autoload/plug.vim'))
  silent execute '!curl -fLo '.data_dir.'/autoload/plug.vim --create-dirs https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim'
  autocmd VimEnter * PlugInstall --sync | source $MYVIMRC
endif
set nocompatible
call plug#begin('~/.vim/bundle')
Plug 'tpope/vim-sensible'
Plug 'tpope/vim-surround'
Plug 'tpope/vim-unimpaired'
Plug 'tpope/vim-repeat'
Plug 'tpope/vim-commentary'
call plug#end()
nnoremap <space> <nop>
let mapleader="\<space>"
let g:mapleader="\<space>"
try
autocmd FileType php setlocal commentstring=\/\/\ %s
autocmd FileType javascript setlocal commentstring=\/\/\ %s
autocmd FileType reason setlocal commentstring=\/\/\ %s
autocmd FileType dosbatch setlocal commentstring=rem\ %s
autocmd FileType rust setlocal commentstring=//\ %s
catch
endtry
map <leader>' ysiw'
map <leader>" ysiw"
map <leader>; A;<esc>
map <silent> <leader><cr> :nohlsearch<cr>
nnoremap <leader>bb :buffers<cr>:b<space>
map <leader>s :w<cr>

Some interesting things that do work:

  • vim-plug works!
  • Basically it’s just really really cool that you can use a .vimrc
    • This means tons of remaps should work
  • :buffers works but it’s kind of ugly
  • :nohlsearch works (removing the highlight)
  • :vsplit works and moving with <C-w>[hjkl]

Lots of things don’t work, like:

  • I got a conflict between my vim-plug config for VSCode and for neovim
    • By running :PlugClean inside VSCode this wiped all my neovim vim-plug directories
    • Effectively I think this means it’s better to maintain your plugins via neovim
    • Then use a subset within VSCode
  • preview, quickfix and location windows
  • I don’t think :windo diffthis works
  • You have the same problem as with VSCode Vim: undoing all changes doesn’t get rid of the file ‘dirty bit’, so you have to save it to fully undo
  • :bd doesn’t work as you expect – you need to use :q instead
  • I found that :e <file path> didn’t work with auto-complete beyond the current directory – just use Ctrl + P instead
  • There is no command history – you can’t use the up arrow to go to previous ex commands that you typed. I’m surprised about this, so maybe there is a config setting somewhere; Ctrl + N / Ctrl + P are used instead of up/down

I’ll try to list more as I go further.

For a couple of hours spent setting it up and to have 90% of the plugins I need working is really great.

Until I can get fugitive and vim diffs (in combination with fugitive) working I will still want to use terminal nvim, but that fits quite easily into my coding process for now.

Javascript REPL

I really like the simple ‘console’ REPL that you have in a browser’s debug tools. I wanted to recreate this in VS Code.

tl;dr Debug console

It turns out after lots of experimenting this can be also done with the ‘Debug Console’ when you start a debug session – see VSCode debugging.

This is exactly what I wanted.

However I had tried a whole bunch of REPL plugins first.

Node.js Interactive window (REPL)

https://marketplace.visualstudio.com/items?itemName=lostfields.nodejs-repl

This is also a nice solution but seemed lacking compared to the browser console because you can’t inspect the objects so easily. But I guess it’s more like a standard REPL.

One problem is that it hasn’t been developed for years.

You have to disable eslint at the top

/* eslint-disable */

This seems more like a REPL than the playground below – it evals immediately

Interactive Javascript playground

https://github.com/axilleasiv/vscode-javascript-repl-docs

Very similar to the above plugin – neither is particularly actively maintained

This does seem to be very nice though.

You have to disable eslint at the top

/* eslint-disable */

This works more like a playground, where you can eval immediately or just eval a line with //= comments after it

You can run a REPL on a markdown file too: https://github.com/axilleasiv/vscode-javascript-repl-docs/wiki/Markdown-code-blocks

Code Runner

By far the most installed plugin is https://github.com/formulahendry/vscode-code-runner. This works for any language but has support for Node.

It’s kind of ugly though and it’s more like a SQL prompt where you can run a line or two of code and get output.

Ramda

https://ramdajs.com/repl/

This seems really interesting – you can install it locally too. This is probably the closest to what I am looking for.

But you have to do quite a lot of installation locally.

Jupyter notebooks

Initially I was left cold by the REPLs as I didn’t have much access to the values – I liked the interactivity of the browser console.

My next step was to try Jupyter notebooks with a Node.js kernel.

This led me to: https://github.com/DonJayamanne/typescript-notebook

This recommends installing Jupyter which adds a whole ton of stuff to VS Code but it all seems quite cool.

You can start in pseudo REPL mode where you just keep running a single line of code.

Or you can properly generate a ‘node notebook’ *.nnb file that you can use as a scratch pad. This then gives you full notebook support – this might be interesting for debug logs. I could create a JIRA-XXX.nnb notebook for each issue – then attach this to the Jira issue. Similar to the vim logs.


Note: You need to disable Vim mode for the nnb file

Then you have nice shortcuts you can use:

  • b – create new run section below
  • enter – focus in the section (like INSERT mode for Vim)
  • esc – lose focus on the section (like NORMAL mode for Vim)
  • shift+enter – run the section
  • j,k – move up and down sections
  • dd – delete section

It also nicely picks up functions that you have created inside the notebook and shows them in the IntelliSense.

VSCode debugging

What I don’t think a lot of VSCode users are used to is constantly living with the debugger turned on. Most web devs grew up in a world without Visual Studio, so console.log is the norm. However anyone who used Visual Studio for Visual Basic or C# development will know the power of constantly using and setting breakpoints in your code. I stopped using Visual Studio long ago, but here are rough notes of what I did to bring the debugging love back.

The debugger attaches, via a Firefox/Chrome extension, to a running server in a debug session.

So you must:

  1. npm run dev
  2. npm run dev:workers (in packages/server)
  3. Start debug session via VS Code (see below)

No more logs

So, I’m trying to get VSCode debugging working again so that I don’t keep pissing around with console.log.

Following on from Speedy deployment.

I have the Firefox debugger extension installed.

I now make sure I add things to my .vscode/launch.json so that I have them in the future for when I forget.

Here is the launch.json configuration:

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Launch index.html",
            "type": "firefox",
            "request": "launch",
            "reAttach": true,
            "file": "${workspaceFolder}/index.html"
        }
    ]
}

Note: that configuration attaches to a file – we want to attach to the running Next.js URL instead.

You can also create something very similar by clicking the ‘Add configurations’ button at the bottom when you have the launch.json file open.

Now, we’re using Next.js, so we need to hook into that. There are quite a few tutorials for this.

For the Next.js specific launch.json:

{
    "version": "0.2.0",
    "configurations": [
        {
            "type": "firefox",
            "request": "launch",
            "reAttach": true,
            "name": "Next: Firefox",
            "url": "http://localhost:3001",
            "webRoot": "${workspaceFolder}"
        }
    ]
}

Note the url and webRoot settings.

Setting up Firefox to connect

You need to enable remote debugging.

Now you should be able to go to the ‘Run and Debug’ side bar and run the “Next: Firefox” configuration, as in the name setting above.

I added a breakpoint to a file, which showed up as a grey circle next to the line numbers – I was expecting a filled red circle – and when you run the debug session the breakpoint doesn’t stop there. The error comes down to the path settings, because Next.js isn’t running at the root of the workspace.


A notification appeared in the bottom corner offering to fix the path mappings (screenshot omitted).

This is similar to the dev.to article’s “Bonus: If your Next.js code is not at the root of your workspace” section, but here we have pathMappings instead of sourceMapPathOverrides.

Note: I adjusted the webRoot to include the packages path after fixing this for Chromium, but it still requires the pathMappings.

I clicked on ‘Yes’ in the above message. It then created the pathMappings launch.json as:

{
    "version": "0.2.0",
    "configurations": [
        {
            "type": "firefox",
            "request": "launch",
            "reAttach": true,
            "name": "Next: Firefox",
            "url": "http://localhost:3001",
            "webRoot": "${workspaceFolder}/packages/client",
            "pathMappings": [
                {
                    "url": "webpack://_n_e",
                    "path": "${workspaceFolder}/packages/client"
                }
            ]
        }
    ]
}

Now the break point within VS Code is a red filled circle and the code correctly breaks on it.

You can do the same in the Firefox dev tools – I actually think I prefer those dev tools for debugging – but then you’re separated from the code.

Setting up Chromium to connect

Now we should be able to do the same for Chrome.

My linux setup doesn’t use regular Chrome, but flatpak Chromium instead. So the second configuration below is much easier.

Note: I found the ${userHome} variable from https://code.visualstudio.com/docs/editor/variables-reference

Unlike Firefox, where I had to set a pathMappings config, for Chrome it’s easier because you just set the webRoot to the packages/client directory. This should work for Firefox too – but doesn’t :woman_shrugging: :woman_facepalming:

{
    "version": "0.2.0",
    "configurations": [
        // This runtimeExecutable is mildly crazy because of flatpak
        // Normal chrome installs won't need runtimeExecutable
        {
            "type": "chrome", // must be chrome
            "runtimeExecutable": "${userHome}/.local/share/flatpak/app/org.chromium.Chromium/current/active/export/bin/org.chromium.Chromium",
            "request": "launch",
            "name": "Next: Chromium (flatpak)",
            "url": "http://localhost:3001",
            "webRoot": "${workspaceFolder}/packages/client"
        },
        // Regular Chrome
        {
            "type": "chrome",
            "request": "launch",
            "name": "Next: Chrome",
            "url": "http://localhost:3001",
            "webRoot": "${workspaceFolder}/packages/client"
        }
    ]
}

Debug console

Once you have a debug session running, you can access the tab ‘Debug Console’ in the output panel (the same panel as the terminal).

This then gives you what seems to be a perfect console session, just like the one you get within a browser.

When you hit a breakpoint you can refer to the locally defined variables and inspect them.
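
As a small illustration (the data is made up; run it under any debug session that can execute the file, e.g. a simple Node launch config), set a breakpoint inside the map callback:

// Tiny made-up example – put a breakpoint on the line inside the map callback
const comments = [
    { author: 'ada', text: 'Nice post' },
    { author: 'bob', text: 'Thanks for sharing' },
];

const summaries = comments.map(comment => {
    const summary = `${comment.author}: ${comment.text}`; // breakpoint here
    return summary.toUpperCase();
});

console.log(summaries);

With execution paused on that line, the Debug Console will evaluate expressions like comment.author or comments.filter(c => c.author === 'ada') against the live scope – the browser-console feel I was after in the Javascript REPL notes above.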

A Happy Illustrated Guide to a PhD

Toeholds

Diego posted a link to Matt Might’s article “The Illustrated Guide to a PhD”, which was funny, sad, and which many would think is accurate. Since I feel optimistic today, I would like to expand on that with an encouraging note.

While we’re all familiar with “perfect” objects, like a circle, triangle, or square:

…they’re not the only class of object that can exist.  “Ah!  I know what you mean”, Kate would say, “but when you look close enough, everything else can be described as a combination of these elements!”  And she’d point to this picture of a house.

And if this is true and all that there is, pushing the boundary of any shape will give us the same inconsequential bulge:

Viewed in this light, the years of anxiety while doing a PhD suddenly become even more of a

than it already is. :sadface:

But there


‘Science is not for scientists, but for the world’

This is an article from a Belgian magazine, Knack – all copyright is theirs. I’m just posting it here, because Hannah Arendt (“The banality of evil“) is a fascinating person and I think this article is important enough to be shared in English.

https://www.knack.be/nieuws/belgie/wetenschap-is-er-niet-voor-wetenschappers-maar-voor-de-wereld/article-opinion-1807221.html

‘In times of rapidly increasing polarization, it is especially important not to rely on ‘gut feelings’, but to base yourself as much as possible on findings from high-quality research’, write Rectors Caroline Pauwels (VUB), Herman Van Goethem (UAntwerp), Rik Van de Walle (UGent) and Luc Sels (KU Leuven). In this contribution they explain what the Hannah Arendt Institute stands for, after the N-VA and Vlaams Belang called on the Flemish Parliament to stop the subsidies.

‘Science is not for scientists, but for the world’
Hannah Arendt

In the Internal Governance, Equal Opportunities and Civic Integration Committee, in addition to much praise, the Hannah Arendt Institute was hit hard by Flemish representatives Nadia Sminate (N-VA) and Sam Van Rooy (Vlaams Belang) yesterday. During the presentation of the functioning of the institute, a press release was sent out in which Nadia Sminate (N-VA) described the institute as “a glorified communication agency of the left-liberal vision of urbanity and citizenship” and called for subsidies to be stopped. As rectors of the four universities involved in the Hannah Arendt Institute (VUB, UAntwerp, UGent & KU Leuven), we regret this tendentious description of the Hannah Arendt Institute and we believe it is of great importance to clarify what the Institute stands for.

Academic citizenship

Science is not for scientists, but for the world. That is why universities try to actively disseminate their knowledge and insights. With the Hannah Arendt Institute, for example, we want to inspire professionals, policymakers and citizens to get started with scientific insights into diversity, urbanity and citizenship. A solid scientific basis also contributes to a thorough dialogue on such themes.

Science is not for scientists, but for the world.

New scientific insights can offer great social added value. In the behavioural and social sciences, the path from science to policy development and concrete practice is often difficult and long. Knowledge in such disciplines too often remains underused, perhaps because its application sometimes bumps into ideological walls, preconceptions or even prejudices. We consider it our task to draw attention to that knowledge. The Hannah Arendt Institute has the ambition to disseminate this knowledge outside the university walls, more specifically on themes such as diversity, urbanity and citizenship. Because that knowledge matters. Because it is important.

Need for progressive insight

Precisely with ‘difficult’ social themes there is a great need for progressive insight. In conversations about complex topics such as migration or free speech, disinformation and ideological bias all too often prevent the possibility of arriving at a workable solution. In that case, sound empirical findings are a good basis and an opportunity to find each other. The opposite is also true: a lack of scientific diligence and in-depth knowledge is a breeding ground for toxic polarization. Hannah Arendt spent her whole life trying to understand what is incomprehensible, because we are too close to it, with ‘our noses right on top of it’. She asks us to think about what we are ‘doing’. She passionately advocated ‘thinking for oneself’, sometimes against friends and prejudices. Advances in scientific understanding and permanent dialogue help us do this.

The social need is there. The scientific research results are also there. Together with the Hannah Arendt Institute, we are working on valorisation: making scientific insights valuable for society. At the Hannah Arendt Institute we provide citizens with knowledge and research results through podcasts, videos and events. We reach professionals with targeted training. We address policy makers through reports and round tables, always with an openness to dialogue and respectful debate. Because especially in the hands of those groups, the knowledge becomes impactful.

With the Institute, we expressly reach out to civil society and to citizens who are ‘in practice’ and who want to make a constructive contribution to the development of society. We can learn from their findings and thus respond to their questions for further research.

The Hannah Arendt Institute is a link between university and society. The Institute’s employees are investigating how you can strengthen the social fabric in sports practice. They investigate how polarization, disinformation and hate speech influence our frame of reference and our actions, and advise local authorities and other government services on how to respond appropriately. They pool knowledge about how cities and municipalities deal with today’s complex challenges and help build communities of practice to inspire other cities and let them learn from each other.

Gut feeling or science

Does the confrontation with a new insight sometimes hurt? Yes, and changing your mind isn’t always easy. It grinds and sometimes pulls to adopt a new idea, change policy, or try out a new method. In times of rapidly increasing polarization it is extra important not to rely on ‘gut feelings’, but to base yourself as much as possible on findings from high-quality research. Universities should not sit back; they should counter disinformation and feed society with the constantly evolving insights from scientific research.

With an organization such as the Hannah Arendt Institute, we take on the responsibility to bring our knowledge to the world, for everyone who wants to use it. Of course, this knowledge cannot be taken or abandoned. It is the basis for dialogue, debate and practical training. The Institute is, as it were, an academic form of citizenship. We want to look beyond ideological boundaries and come up with solutions together with other social actors. The world needs more of that, not less.

Fumblings about Compassion

“One with compassion is kind even when angry

one without compassion will kill as he smiles.”

Shabkar – Tibetan poet

I’ve re-posted this here now: https://shittyphilosophy.wordpress.com/2022/05/19/fumblings-about-compassion/

I went to Krakow, Auschwitz I and Auschwitz II – Birkenau. It was surprising how un-horrifying it was. Birkenau was certainly bleak, but it’s very hard to picture in my mind the terror of the victims, even as you walk along the same path they did to the gas chamber.

What stuck in my mind were the Nazi doctors, and in particular Josef Mengele, who would smile and whistle whilst he went about selecting people for the gas chamber. This did not make me think about how much I hate them, but made me view them as humans who had no compassion at all towards the Jews and the other victims in the camp.

Rather than look at whether people are good or bad, I think that we can look at humans along a spectrum of how much compassion they show. Even this gets harder to judge because Hitler was a supporter of animal welfare (assuming it wasn’t all Nazi propaganda) and was very fond of dogs, so he certainly had compassion, but none for the Jews. So compassion alone is not enough; selective compassion can be just as bad.

Perhaps a better way to look at it is having compassion for things we hate or don’t care for. This becomes a clearer way of separating ourselves and our future selves from those who try to kill under the idea of some ‘just’ cause.

Taking this as far as I can imagine matches the Buddhist criteria for having compassion for all things that can suffer. To prevent the future horrors of another version of the Nazis, our compassion has to include all humans whatever they have done. Whatever our feelings are for others we must have compassion for them.

This is where I see a difference between the Christian “Love thy neighbour” against the Buddhist “remove suffering for all that can suffer”. Love is more powerful than compassion and I appreciate that part of Christianity more and more. But I want to prevent future wars and “Love thy neighbour” is too open to cynicism and misinterpretation. Christianity is too often twisted to fuel wars. Buddhism appears to be a more humble attempt to get people to change the way they think.

So I see space for having a Buddhist focus of compassion for all living creatures (or all that can suffer) and then keep moving towards the Christian ideal of love.

What if the trees were the real angels?

I’ve re-posted this here: https://shittyphilosophy.wordpress.com/2022/05/19/what-if-the-trees-were-the-real-angels/

What if the trees were the real angels?

This is what I sat thinking about while looking at this war memorial angel statue in Lier.

Turns out there is a name for this – Pantheism.

But my interpretation is roughly that Nature and God are the same thing. It’s reasonable to me to say “Nature created humans”.

Translating Sublime Text into Vim

Intro

Coming from a Windows world, getting into Vim was for me almost exactly like the struggles I had learning French or Dutch. I spent 10 years learning French growing up and I can’t speak a proper sentence. I then moved from England to the Dutch-speaking part of Belgium (Flanders) and I learnt to speak Dutch to a conversational level within 2 years.

If you’re going to learn Vim you need to immerse yourself in it. I suspect the majority of Vim users only ever use it to make minor file modifications via SSH. That’s what I did anyway.

I’ve used lots of editors in Windows but the one I prefer now is Sublime Text (ST). ST has almost all the exact same commands as other editors, with one major improvement, Ctrl+P – we’ll come to that later. ST is free to use with a popup once in a while; it’s a great tool and you should buy a licence.

So for users of all other editors, all you have to do is learn the elements of Sublime Text I use here and then you should be able to translate them to your own editor. I hear you, notepad lovers. So we’ll use ST as the boundary layer between our nice fuzzy Ctrl + N, Ctrl + C, Ctrl + V and Vim’s :e, y and p.

Why O Why

Is it worth the pain? I have spent in excess of 100 hours doing nothing but learning Vim and getting it set up the way I want.

I mean, the question ‘How to close Vim?’ has 1000+ upvotes on Stack Overflow. That’s insanity.

However, I think that if you master Vim the layer between your thought and your code becomes thinner. So this has nothing to do with linting or plugins; this has to do with performing at a higher level with the code that you write.

Also I think Vim is just misunderstood. This is where my analogy of learning a spoken language comes from. Switching between Windows editors is like switching dialects – sure, some of the Scottish folk sound funny, but you can understand what they say. All of us assume that Vim is just another dialect, but it’s not. It’s like nothing you’ve used before. So there’s nothing for your brain to grab on to and understand.

  1. Vim gets you closer to your code. Once performant in Vim you can perform code editing tasks faster and keep up with the speed of your thought. This breaks through the ceiling that you will hit with most other GUI editors.
  2. Vim is very fast. I don’t think it’s necessarily faster than ST, but certainly it’s at that level – everything happens instantly. There’s none of the delay that you sometimes have when opening ST or other heavier editors, e.g. Netbeans or IntelliJ. Speed is one of the barriers between your thought and your code; slow editors are slowing you down.
  3. Vim is ‘hyper’ cross-platform (Windows, Mac, Linux, SSH, Docker, Browser (via WASM), Android, Amiga …) and works via the command line, so everything you learn on Windows has the multiplied benefit that you can use the exact same instructions on Linux or on Mac. Again, so is ST, but Vim works via SSH, it works in Docker, it works everywhere.
  4. Once you learn the commands you can do things quicker: deleting a word is just typing dw, and once fluent this can be performed faster than using the mouse or Ctrl+Shift+Right-arrow, Delete. These are small 1% improvements, but they add up.
  5. Vim has a command history. This is really useful for doing repeated things. Sure your search box in other editors has it plus you’ll have recent files that you’ve opened, but every single command that you type is recorded. My example for this is reformatting code with Regex. Once you’ve closed your regular editor your search and replace history is lost. In Vim it’s there waiting for you.
  6. Not only that but anything you did as the last command can be repeated with .. This can be complex things like repeating all the text you just typed in Insert mode. Or if you just cut a line and you want to paste it a few times, now you’re just typing . instead of Ctrl + V.
  7. Syntax highlighting! In the DOS prompt, SSH prompt! Seriously, this is amazing. Windows has been around for 30 years and there’s nothing else I know of that can give you this.
  8. Less chrome. Vim is mostly like using the distraction-free mode of Sublime Text all the time. Less distraction, more thinking.
  9. Everything you’ve got in your editor currently: Tabs, Split screen, Projects, Linting, REPL, Plugins, Sidebar file tree. But we’re still in the donkey DOS prompt here.
  10. Closer to the command line. Again thinning that barrier between you and your code. In Vim you can type ! and then any command line command, e.g. !mkdir temp or !python will allow you to drop into the python REPL and then come straight back to Vim once you’re done.
  11. Vim’s buffers, which are the equivalent of tabs in other editors, are amazing. When you have a regular editor open you’ll typically have 10 or so tabs open, or at least that’s what I had as otherwise it becomes too crammed. With Vim you just keep opening all the files you want into buffers. I regularly now work with 100 buffers open, but then I can very easily switch between them – :b [part of file name] then <Tab> and you switch to the other file, if you have more than one file open with that bit of the name then you just tab through the list, e.g. :b Controller will allow you to tab through all the *Controller* files (buffers) that you have open.
  12. Not strictly Vim itself, but it has excellent integration with FZF and Ripgrep, command-line tools (written in Go and Rust respectively) for fuzzy file finding and ‘find in files’. These tools are ridiculously fast. Having a fuzzy file finder means that you don’t need the folder structure on the left any more. Ripgrep works better on Linux, but on any platform it will churn through GB of source code. Also once you have the search results you can do more with them: they open up in a standard Vim ‘window’, so you can search/highlight within your search results, and you can also run search/replaces on the list that you get back.
  13. Vim sessions are what allow Vim to work in a similar way to Sublime Text in that you can save all the open files that you had when you close Vim and open up exactly where you were last time.
  14. But Vim sessions are really flexible, the one great thing I’ve found about them is that I can combine all the projects that I’m working on into one. My colleagues use various other IDEs and we have a set of projects each with their own git repo and docker container. My colleagues need to switch projects each time they want to look at code in one project. However I can put all the repos in one folder and then create my Vim session above all of them. Then FZF can find any files amongst them, Ripgrep can search through all of them at the same time. So it means I can jump-to-definition across any project that I have.
  15. Combining all you do with other tools in one. Here’s a few things that I now do in Vim that I used to use other tools for: file diffs, git diffs, subversion diffs, todo lists, database connections/commands, git conflicts, subversion conflicts. This is not quite a case of Emacs where you never need to leave it again, but all my development tools work perfectly inside Vim, so I can use the power of the various commands I’ve learnt in Vim across these other tools
  16. Git diffs, this is a surprising one, but once you start using Fugitive plugin doing a side by side diff is easy and comes with nice syntax highlighting
  17. Git conflicts are handled beautifully with the Fugitive plugin; the majority of developers that I know only know how to use SourceTree or the output from Bitbucket diffs. With Fugitive you can do a 3-way vertical diff (see the Vimcast on Git conflicts), so you have the conflicted file in the middle with the two files you want to merge on either side. It is the nicest way possible to do a merge. Even the GUI tools that I’ve seen that do a 3-way merge are pretty ugly. Meld is quite a good one for Linux and Windows, but it’s not fully supported on Mac and it suffers from being slow. In Vim everything is fast, and again I’ve got all my Vim tools handy, as the diff windows are just Vim windows.
  18. Todo lists is a simple one – but you have things like Org mode in Emacs that you can replicate in Vim, but for the most part Markdown does everything you need.
  19. Database connections are always done in a special application. The main one I’ve used is SQL Server Management Studio (SSMS) – but of course that only works for SQL Server. If you work with MySQL either you need to use things like PHPMyAdmin or just use the MySQL command line, there are sometimes closed source tools for connecting to various databases but I’ve never particularly liked one. Tim Pope recently created the dadbod plugin that allows you to connect and run commands on all the major databases. This means that like SSMS I can have my SQL file open with syntax highlighting but then I can highlight a few lines and run those. This is super powerful, you then of course get all the query results in a Vim window and can use all the regular commands to search that and copy paste text from there. I still regard SSMS as the most powerful SQL editor that I used, but now I can have the majority of the functionality that I used there but for any database. I don’t have the things like query optimisations, but it’s rare that I need that.
  20. Making a tailored editor… typically all you do with other editors is install a few plugins. With Vim it’s expected that you’ll customise almost everything. People with ST share their list of plugins, whereas people with Vim share their .vimrc file which contains all their plugins and all their settings. It’s the difference between an off-the-peg suit and a tailored suit; other people might not see the difference but you will feel it. You create Vim exactly as you want it.
  21. Made by individuals…
  22. Fully free and open source, it’s inspired a whole bunch of new editors – neovim, gonvim, AMP…
  23. Touch typing becomes more important. Once you use the keys for everything then you encourage yourself to touch-type more. This adds benefits to your coding, and as Joel Spolsky says, fast accurate typing is one of the fundamentals of being a developer. I’m still not great at this but using Vim is helping me to improve.
  24. Split windows are something that I never bothered with in ST, but recently they’ve become very useful. When I’m trying to implement a new feature based on someone else’s code I find it useful to have a side-by-side view of the two files. Further, I can have the main code I’m working on in one window and then search throughout the code in the other window. Again you can do things like this in ST, but I never really started doing this until I got used to Vim and Vim split windows.

Lesson 1: Install GVim

GVim is by far the best way to get introduced to Vim; it is a much more standardised way of using Vim than starting in the terminal and hitting problems. I really want to encourage people to try using Vim in the DOS prompt, just because it’s amazing to finally see it there, but for anyone starting out, just use GVim. I still use GVim on Windows as there’s still a frustrating slowness to editing in the DOS prompt, but almost all my other gripes with it have disappeared over the last two years – the Windows team changing it are doing an amazing job.

Nevertheless, we’ll start with GVim. As well as being more consistent, it allows for discovery: it has a lot of common menu commands at the top that typically say what the key commands are, so that you can slowly familiarise yourself with them.

I suggest installing GVim via Chocolatey, or otherwise you can just download it and install it from the vim.org site (that’s all Chocolatey does behind the scenes).

Hopefully it also means I can help more people: PowerShell users can probably translate the DOS commands more easily to PowerShell than vice-versa, and Linux and Mac users used to GUI tools should be able to figure it out too. When I write Ctrl + C people will understand; when I write <C-c> users unfamiliar with Vim / Emacs will stare blankly.

Install Vim in DOS (not required)

If you’ve installed GVim then this also installs a command line version of Vim. The good part of this is that it comes with the most recent version of Vim – currently 8.1. There are some very nice things that have been added in the most recent version that improve the colour handling inside the Windows 10 DOS prompt.

Add C:\Program Files\GVim\bin to your PATH.

I love using Vim inside the DOS prompt. I think it is the simplest, purest way of using Vim in Windows.

Vim 7.4 also comes with Git for Windows. You can install this via Chocolatey, or just via the Git website.

> choco install -y git

We then need to add the GNU usr tools to our PATH – add C:\Program Files\Git\usr\bin to your PATH.

This gives you all the loveliness of the GNU tools, e.g. ls and grep, as well. If you really want to do yourself a favour, install clink and solarized DOS prompt colours too.

Lesson 2: Basic commands

You can skip this if you know the commands. I knew the basics of these for years before I started immersing myself in the rest of Vim.

Inserting code

You go into INSERT mode by hitting the i key and switch back to NORMAL mode by hitting Escape.

Once you’re in INSERT mode it’s fairly similar to other editors: you can move left, right, up and down with the arrow keys, then just type and delete stuff with the backspace or delete keys.

Initially, to stay close to other non-modal editors, most users will spend all their time in INSERT mode. I personally think there is nothing wrong with this, and it is exactly what I did to be as productive as possible in the beginning.

To be more productive though, it is necessary to learn the other Vim commands; otherwise you’re just giving up all the other features that you’re used to in ST, almost all of which do exist in Vim – they’re just more hidden, or you need to install a plugin for them.

Searching / moving code

A lot of the Ctrl + ... commands that you expect from other editors are handled in Vim’s NORMAL mode – you should see the word NORMAL in the bottom left-hand corner.

This is the weirdest part of Vim, that you delete words via three or four letter commands.

Command             Sublime Text                      Vim
Undo                Ctrl+z                            u
Redo                Ctrl+y                            Ctrl+r
First line          Ctrl+Home                         Ctrl+Home / gg
Last line           Ctrl+End                          Ctrl+End / G
Line N              Ctrl+g N Enter                    Ngg
End of the line     End                               End / $
Start of the line   Home                              Home / 0
Next word           Ctrl+Right                        w
Previous word       Ctrl+Left                         b
Page up             Pg Up                             Ctrl+u
Page down           Pg Dn                             Ctrl+d
Find                Ctrl+f [text] Enter               /[regex] (forward) / ?[regex] (back)
Replace             Ctrl+h [search] [replace] Enter   :s/[search]/[replace]/ (line) / :%s/[search]/[replace]/ (global)

I actually practiced the commands by installing an Android app with Vim commands and the free beginner part of shortcutFoo’s Vim course.

After those commands the next most important one is :. This is the most common way of starting the command line typing at the bottom. It’s similar to when you type Ctrl + P into ST.

The first command to type is :help; this shows the first cool thing of Vim – split windows as standard.

The weirdest concept I had (after years of very light usage) is typing :w instead of :x to write the file, because now we actually want to stay in Vim, rather than get the hell out as fast as possible.