Artemis


Simple web-based Markdown renderer

Published on 2018/10/04

About a year has passed, and I'm back at school. With that, obviously, come school-time projects!

The project I worked on for the last two days (now at v0.3, and working perfectly) is a "new" lightweight and minimalistic markdown renderer.

For context, I use a VPS to store some data, including markdown documents: article drafts, presentation notes, or simply documents I want to share.

I wanted to have two things:

For the first point, I still wanted to use the markdown renderer for "private" documents, while hiding them from the directory.

The idea struck me a few days ago, while working on some markdown documents I wanted to share with work colleagues for a project, but that I didn't want to publish.

And this article is here to document this little journey.

# The idea

I wanted a small and easy-to-maintain server following this logic:

- When asked for a given markdown file, load it, parse its front-matter tags, and return only the actual markdown body, either raw or rendered to HTML.
- Offer an index that lists some of the files, but not all of them. This is driven by a front-matter tag (`indexed`): a small loop keeps only the files whose `indexed` status is set to true, and every other file stays hidden.

I wanted two routes: `/` for the index, and `/:page` to render a given document.

To access the raw document, I didn't want yet another route, so I decided to go with a GET parameter acting as a flag (`raw`). That means that on every valid `/:page` URL, if the user appends `?raw`, they'll receive the plaintext content.

That's all I care about for the first version.

# Chosen stack

I decided to go with Node.js, Express and EJS for this quick project.

For parsing the front-matter tags, I went with `front-matter`, and `marked` is tasked with converting the markdown to HTML.
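The front-matter split itself is simple enough to sketch by hand. The project relies on the `front-matter` package, which parses the header as YAML; this standalone toy version only illustrates the idea and keeps every value as a plain string.

```javascript
// Hand-rolled sketch of front-matter extraction (illustration only; the
// real project uses the front-matter package).
function splitFrontMatter(text) {
    const match = /^---\n([\s\S]*?)\n---\n?([\s\S]*)$/.exec(text);
    if (!match) return { attributes: {}, body: text };

    const attributes = {};
    for (const line of match[1].split('\n')) {
        const colon = line.indexOf(':');
        if (colon > -1) {
            attributes[line.slice(0, colon).trim()] = line.slice(colon + 1).trim();
        }
    }
    return { attributes, body: match[2] };
}

const doc = '---\nindexed: true\n---\n# Hello\n';
const { attributes, body } = splitFrontMatter(doc);
console.log(attributes.indexed); // 'true' (a string in this sketch)
```

With this, the index filter reduces to keeping files where the `indexed` attribute is true (here the string `'true'`; a real YAML parser would give a boolean).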

Note that, right now, the project is at v0.3 and has introduced a few features, like syntax highlighting with highlight.js, caching with Redis, etc.

Also note that the full dependency list can be found on the project's readme.

For the CSS, Skeleton is more than enough; I just needed to add a small tweak to cap images at a maximum width, which is quick to do.
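That tweak can be as small as a single rule; this is a sketch of the kind of rule involved, not necessarily the exact one used.

```css
/* Keep rendered images from overflowing the page column */
img {
    max-width: 100%;
    height: auto;
}
```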

# The development

The development started with structuring everything, especially the Express app, to allow for clean and easy iteration.

Node.js gave me quite some trouble with filesystem manipulation, due to the sparseness of its standard library, but also the ambiguity of its documentation around basic functionality such as listing files and checking whether a path is a directory.

The lack of documentation on integrating EJS with Express had me fiddling around for some time, but a few blog articles managed to fill the void.

The routes were pretty straightforward, as the following snippet shows.

```javascript
app.get('/', async (req, res) => {
    const list = await reader.list();
    res.render('list', {
        list,
        count: list.length
    });
});

app.get('/:file', async (req, res) => {
    const fileId = req.params.file;

    const file = await reader.load(fileId);

    if (file === null) {
        return res
            .status(404)
            .send('Markdown document not found');
    }

    // Conditional rendering
});
```
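The conditional rendering mentioned at the end of that route can be sketched as a pure function; `file.body` and the `renderMarkdown` callback are stand-ins for the parsed document body and `marked`, not the project's actual code.

```javascript
// Decide between plaintext and HTML output based on the ?raw flag.
function respondWith(query, file, renderMarkdown) {
    if ('raw' in query) {
        // ?raw was appended to the URL: serve the plaintext body
        return { type: 'text/plain', content: file.body };
    }
    return { type: 'text/html', content: renderMarkdown(file.body) };
}
```

In the route itself, the result would feed `res.type(...).send(...)`, Express exposing the `?raw` flag as a key of `req.query`.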

Note that the reader object used in both handlers is a small class I wrote to properly handle filesystem interactions (listing, reading, etc.).

Overall, developing the tool up to an MVP, without any cache or export mechanism, was pretty straightforward, but I later decided to develop two more versions: one for caching, one for PDF export (plus caching).

While the PDF support required quite a lot of work (including an embedded "cron" task to regularly clean up the PDF cache), the basic page-rendering cache (using Redis) was awfully straightforward.

For every route, I simply added `cache.route()`.

```javascript
app.get('/', cache.route('list'), async (req, res) => {
    // ..
});

app.get('/:file', cache.route(), async (req, res) => {
    // ..
});
```

The nicest feature of this tool is that there's no configuration file: I only need two environment variables, one for the markdown document base path and one for the cache folder.
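In code, the whole configuration could look like this; the variable names and defaults are assumptions, not the project's actual ones.

```javascript
// The entire "configuration": two environment variables with local defaults.
const DOCUMENTS_DIR = process.env.DOCUMENTS_DIR || './documents';
const PDF_CACHE_DIR = process.env.PDF_CACHE_DIR || './cache';
```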

# Conclusion

While the overall project, even the caching mechanism, was quite straightforward (besides a bit of fiddling with the docs), the biggest trouble I had in production was with wkhtmltopdf, which I chose for being lightweight.

The problem was that wkhtmltopdf needed a running X server, which is obviously unacceptable for a simple HTTP server.

After a bit of searching, I found a simple solution: install a lightweight virtual X server.

The following commands fixed all my problems.

```sh
apt-get install wkhtmltopdf
apt-get install xvfb
printf '#!/bin/bash\nxvfb-run -a --server-args="-screen 0 1024x768x24" /usr/bin/wkhtmltopdf -q "$@"\n' > /usr/bin/wkhtmltopdf.sh
chmod a+x /usr/bin/wkhtmltopdf.sh
ln -s /usr/bin/wkhtmltopdf.sh /usr/local/bin/wkhtmltopdf
```

At this point, I have a pretty lightweight Markdown reader that also exports files to PDF!