Bascule has been sitting quietly, generating this basic website but not doing much more. It has served my immediate need - to replace my previous WordPress-driven website with something a bit more custom and original - but there's plenty more to do. There are some key deficiencies I need to address:

  • Support non-blog pages which sit outside of the date folder structure (mostly in place, just untested)
  • Extend the markdown processing to allow me to embed images and other media into web pages, or to apply specific CSS classes to text blocks (see the sketch after this list)
  • Properly support assets which are part of the theme, and assets which are separate from the theme
  • Support uploading the generated content to a web host, via SFTP, rsync or some other solution
  • Make Bascule more fault-tolerant, or possibly stricter?
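
Taking the markdown point first: I haven't decided how this will work in Bascule yet, but if the processing is built on flexmark-java (a common markdown library on the JVM), its AttributesExtension gets much of the way there. The sketch below is purely illustrative - the function names, the `{.wide}` syntax and the assumption that flexmark is even the right tool are all up for grabs:

```kotlin
import com.vladsch.flexmark.ext.attributes.AttributesExtension
import com.vladsch.flexmark.html.HtmlRenderer
import com.vladsch.flexmark.parser.Parser
import com.vladsch.flexmark.util.data.MutableDataSet
// (package names vary slightly between flexmark versions)

// Hypothetical sketch: enable flexmark's AttributesExtension so that markdown
// like `![diagram](pipeline.png){.wide}` can attach CSS classes to the
// rendered HTML, for images and text blocks alike.
fun buildMarkdownTools(): Pair<Parser, HtmlRenderer> {
    val options = MutableDataSet()
        .set(Parser.EXTENSIONS, listOf(AttributesExtension.create()))
    return Parser.builder(options).build() to HtmlRenderer.builder(options).build()
}

fun renderPost(markdown: String): String {
    val (parser, renderer) = buildMarkdownTools()
    return renderer.render(parser.parse(markdown))
}
```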

Then there are ideas I have which could extend Bascule beyond a simple static site generator, for instance:

  • Additional output channels beyond HTML, such as PDF, Atom/RSS, or something more custom
  • Plug into a search indexer such as Solr or Elasticsearch, so that I could add a live search engine to my otherwise static site
  • Support other text formats beyond markdown
  • Provide more configuration around URLs (for instance, including dates as part of the URL structure, as sketched below)
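
To make that last point concrete, this is the kind of thing I mean by URL configuration. It's purely a sketch - none of these names exist in Bascule today:

```kotlin
import java.time.LocalDate

// Hypothetical: a post's permalink driven by a pattern in the site config
// rather than hard-coded into the generator.
data class UrlConfig(val postPattern: String = "{year}/{month}/{slug}.html")

fun permalink(config: UrlConfig, date: LocalDate, slug: String): String =
    config.postPattern
        .replace("{year}", date.year.toString())
        .replace("{month}", "%02d".format(date.monthValue))
        .replace("{slug}", slug)

// permalink(UrlConfig(), LocalDate.of(2019, 4, 12), "rebuilding-bascule")
//   => "2019/04/rebuilding-bascule.html"
```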

To achieve these goals, I'm taking a long, hard look at the current code structure: I need to make it more flexible and more extensible. I'm imagining a pipeline approach, along the lines of:

  1. scan input folders
  2. build metadata model
  3. parse input content - currently markdown, other formats later
  4. build index page
    • for each output channel? HTML, PDF, more?
  5. build each post/page
    • again, for each output channel. Could this be overridden at the template level, to prevent certain pages from appearing in certain channels?
  6. build post/page navigation pages (see posts/lists1.html for an example)
  7. build taxonomy tag navigation pages (see tags/kotlin/kotlin1.html)
  8. pass content to other generators or processors
    • for instance, to build a Solr search index
  9. transfer files to server
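
Sketched as code, the shape I'm imagining looks something like the interface below. Everything here is hypothetical - the names, the SiteModel, the stages themselves - but it captures the idea of each step taking the model built so far and enriching it:

```kotlin
// A rough sketch of the pipeline shape I have in mind. None of these types
// exist in Bascule today; SiteModel stands in for the real metadata model.
interface PipelineStage {
    val name: String

    // Each stage receives the site model built so far and returns an
    // (optionally) enriched model for the next stage.
    fun process(model: SiteModel): SiteModel
}

data class SiteModel(
    val posts: List<Post> = emptyList(),
    val outputs: Map<String, ByteArray> = emptyMap()   // output path -> rendered bytes
)

data class Post(val slug: String, val title: String, val rawContent: String)

class Pipeline(private val stages: List<PipelineStage>) {
    fun run(initial: SiteModel): SiteModel =
        stages.fold(initial) { model, stage ->
            println("Running stage: ${stage.name}")
            stage.process(model)
        }
}

// Usage would look something like:
// Pipeline(listOf(ScanFolders(), BuildMetadata(), ParseMarkdown(),
//                 BuildIndexes(), BuildPosts(), BuildNavigation(),
//                 BuildTaxonomies(), SolrIndexer(), SftpUpload()))
//     .run(SiteModel())
```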

This all requires quite a bit of thought and planning. I'd love for others to be able to plug into this pipeline with their own code in the future - a third-party stage might look like the example below.
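
Continuing the hypothetical sketch above, a third-party stage would simply implement PipelineStage and be registered when the pipeline is assembled - here a throwaway word-count stage:

```kotlin
// Hypothetical third-party stage: reports a word count for every post as the
// pipeline runs, without touching the model.
class WordCountStage : PipelineStage {
    override val name = "word-count"

    override fun process(model: SiteModel): SiteModel {
        model.posts.forEach { post ->
            val words = Regex("\\S+").findAll(post.rawContent).count()
            println("${post.slug}: $words words")
        }
        return model
    }
}

// Pipeline(listOf(/* ...built-in stages... */ WordCountStage())).run(SiteModel())
```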

But for now, I'll experiment with some refactoring.