Lowdown & Shell
Get it? It’s like Netflix & Chill!
As usual, I owe it to you, or rather take the opportunity, to explain how the build script of this site works, what corners I had to cut to make it happen, and which workarounds I had to make, if any. I won’t go through every line in the script, but I will try not to leave out anything important. If you think I did, please reach out and let’s discuss. Maybe I’m right, or maybe you’re wrong!
The heart of the script is the makepage function. It does two things: first it “calculates” the breadcrumbs of the current page, then it populates the embedded HTML template with the page content and metadata.
makepage () {
    if [ "$( basename "$1" )" = "index.md" ]; then
        local parent="$( dirname "$1" )"
    else
        local parent="${1%.md}"
    fi
    mkdir -p "output/${parent}"
    local components="${parent}"
    while [ "$components" != "." ]; do
        local breadcrumbs=" / <a href=\"${base_url}/${components}\">$( basename "${components}" )</a>${breadcrumbs}"
        components="$( dirname "${components}" )"
    done
    [ "$4" = "ar" ] && local dir="rtl"
    cat << EOF > "output/${parent}/index.html"
# HTML TEMPLATE GOES HERE!
EOF
}
The trick I used here is that I loop over the path, prepending the link of the current path component to the old breadcrumbs string, which may be empty on the very first iteration. Then the path gets updated with the parent of the current path. All this runs while the path is not the root directory of the project.
The thing I have to live with here is the first /. It doesn’t hurt and I like how it looks, so I didn’t bother complicating things further just to remove it.
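The breadcrumb loop can be run standalone to see what it builds up. This is a minimal sketch, with a made-up base_url and path, not the actual function:

```shell
# Demo of the breadcrumb loop: walk a path upwards, prepending each
# component's link, so the crumbs end up ordered from root to page.
base_url="https://example.com"   # placeholder value for the demo
components="blog/tech/post"
breadcrumbs=""
while [ "$components" != "." ]; do
    breadcrumbs=" / <a href=\"${base_url}/${components}\">$( basename "${components}" )</a>${breadcrumbs}"
    components="$( dirname "${components}" )"
done
# prints the crumbs: blog, then tech, then post, each as a link
printf '%s\n' "${breadcrumbs}"
```

Prepending (rather than appending) is what turns the innermost-first walk into a root-first trail.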
Then there are a couple of nested loops. The outer one loops over the blog sections, which are the directories under ./blog. The first thing done there is to check whether the directory has an index.md file to extract the title from; if not, the title is just the slug, i.e. the directory name.
if [ -f "${section}/index.md" ]; then
    section_title="$( lowdown -X title "${section}/index.md" 2> /dev/null )"
    cp "${section}/index.md" "${section}/index.md.old"
else
    section_title="$( basename "${section}" )"
    [ "${section}" = 'blog' ] || printf '# %s\n\n' "${section_title}" > "${section}/index.md"
fi
I wanted to be able to add content to the section page later, while still being able to add the list of pages under each section. At the end of the script, the generated index.md is removed and the saved index.md.old is restored.
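The backup-and-restore dance might look like this at the end of the script. This is a hypothetical sketch of that cleanup step, with demo paths, not the script’s actual code:

```shell
# Set up a fake section for the demo
tmp="$( mktemp -d )"
section="${tmp}/blog/tech"
mkdir -p "${section}"
printf '# Tech\n' > "${section}/index.md.old"               # the saved original
printf '# Tech\n\n- page list\n' > "${section}/index.md"    # the generated page

# Restore step: put the original back; if there never was an original,
# just drop the generated file.
if [ -f "${section}/index.md.old" ]; then
    mv "${section}/index.md.old" "${section}/index.md"
else
    rm -f "${section}/index.md"
fi
```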
Another trick in the sections loop is that I use the -depth option of find. This way the innermost directories come first in the results. This helps with the inner loop, where I use -maxdepth 1 for find, so that only this section’s links are added to its page, and the same page is not processed multiple times as I move up the directories.
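The two find invocations described above can be sketched like this. The directory layout is made up for the demo; only the -depth and -maxdepth 1 flags are taken from the post:

```shell
# Build a tiny fake tree to loop over
tmp="$( mktemp -d )"
mkdir -p "${tmp}/blog/tech"
touch "${tmp}/blog/tech/post.md"
cd "${tmp}"

# -depth yields the innermost sections first; -maxdepth 1 keeps the
# inner loop from picking up pages that live in deeper subsections.
for section in $( find blog -depth -type d ); do
    for post in $( find "${section}" -maxdepth 1 -name '*.md' ); do
        printf '%s: %s\n' "${section}" "${post}"
    done
done
```

Here blog/tech is visited before blog itself, and when the loop reaches blog, -maxdepth 1 hides blog/tech/post.md from it.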
After getting the page metadata using lowdown -X, and checking whether this is a published page or a draft, makepage is called and an entry is added to a temporary list file.
if [ -n "${date}" ]; then
    makepage "${post}" "${title}" "${date}" "${lang}"
    printf -- '- [%s] [%s](%s)\n' "${date}" "${title}" "${path}" >> section.list.temp.md
    if [ "${section}" = 'blog' ]; then
        printf -- '- [%s] [%s](%s)\n' "${date}" "${title}" "${path}" >> blog.list.temp.md
    else
        printf -- '- [%s] [%s](%s) [[/%s](/%s)]\n' "${date}" "${title}" "${path}" "${section_title}" "${section}" >> blog.list.temp.md
    fi
    printf '[published:%s] %s: %s\n' "${post#blog/}" "${date}" "${title}"
else
    printf '[draft:%s] %s\n' "${post#blog/}" "${title}"
fi
After processing all the pages in a section, and sorting the entries by date, the section page is created using makepage, and index.md is restored.
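Because every list entry starts with the date in [YYYY-MM-DD] form, sorting by date is just a plain text sort. A small sketch, where the newest-first direction is my assumption:

```shell
# Write a demo list file in the same shape as section.list.temp.md
tmp="$( mktemp -d )"
cat << 'EOF' > "${tmp}/section.list.temp.md"
- [2023-01-05] [Older post](/blog/tech/older)
- [2024-11-20] [Newer post](/blog/tech/newer)
EOF

# Each line starts with "- [YYYY-MM-DD]", so a reverse lexical sort
# is also a newest-first date sort.
sort -r "${tmp}/section.list.temp.md" > "${tmp}/sorted.md"
```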
After that, the collective blog archive is generated, as well as the list pages (currently: presentations and projects), and some of their content is copied to the homepage. The license page is created as well, and static files are copied to the output directory.
The last page to be created is the homepage, after which the temporary files are removed and the original index.md is restored.
The current version of the complete build script is here.
A Sprinkle Of Magick
Posting images had a problem that needed to be taken care of: compressing, resizing, and removing EXIF metadata. The prep.sh script comes to the rescue.
It is basically a loop calling mogrify -strip -resize 800x "${source}" on each image file in the images_src directory and moving it to the images directory. I call this script before committing any updates to the upstream repository. The images_src directory is also added to .gitignore to keep the upstream repository clean.
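The core of such a loop might look like the sketch below. The loop structure is assumed from the description; only the mogrify invocation is the one quoted above, and it is guarded so the demo still runs where ImageMagick is missing:

```shell
# Set up demo directories with a stand-in file as input
tmp="$( mktemp -d )"
cd "${tmp}"
mkdir -p images_src images
touch images_src/photo.jpg   # stand-in for a real image

for source in images_src/*; do
    [ -f "${source}" ] || continue
    # Compress, resize to 800px wide, and strip EXIF metadata in place.
    # Guarded for the demo: skipped (and errors ignored, since the
    # stand-in file is not a real image) if mogrify is unavailable.
    if command -v mogrify > /dev/null 2>&1; then
        mogrify -strip -resize 800x "${source}" 2> /dev/null || true
    fi
    mv "${source}" "images/$( basename "${source}" )"
done
```

After the loop, images_src is empty again and the processed files sit in images, ready to commit.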
In the future, I may add more stuff to prep, like spell checking.
That’s it for now. Till next time :)