Optimizing server-side rendering

This page contains notes taken from our internal Jira.

After profiling our app a little, optimizing query/navigation, and removing some unnecessary work from render-page (in ui.clj), this is where we're at:

- 88.3% render-page
  - 42.8% om.dom/render-to-str
  - 28.0% make-reconciler
    - 22.7% server-send
      - 13.7% merge!
      - 9.0% server-parse
    - 4.1% into
  - 17.5% render-root!
    - 14.5% client-parser
      - 8.0% query/navigation, wat.
    - 1.3% parser-meta 
      - is this needed for server rendering?

Ideas

Persistent server clients

My thought for optimizing this would be to keep cached reconcilers for each route, minimising the time spent in merge! and hopefully making server-parse faster. Since we cache the datascript instances in the reconcilers, we'd also be able to incrementally update our client reads. This could potentially bring the "make-reconciler" and "render-root!" calls down to something very small.

We could keep these persistent server clients only for non-authed users. Authed users should be able to use whatever is in the non-authed reconcilers as a baseline, and do a gather-sends, server-parse and merge! based on the non-authed datascript state and read-basis-t-graph. The non-authed reconcilers won't have any read-basis-t-graph entries for reads they cannot access. We should be able to take this new state and render without om/add-root!, skipping the indexing step.
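To make the idea a bit more concrete, here's a minimal sketch of a route-keyed cache of reconcilers for non-authed requests. The cache shape and the one-argument make-reconciler signature are assumptions for illustration, not the actual code from ui.clj:

```clojure
;; Sketch only: cache one reconciler per route for non-authed requests.
;; `make-reconciler` is the app's own constructor from the profile above;
;; taking the route as its only argument is an assumption, and concurrency
;; (two requests racing to build the same reconciler) is ignored here.
(defonce non-authed-reconcilers (atom {}))

(defn cached-reconciler
  "Returns the cached reconciler for `route`, creating and caching it on
   first use. Only safe for non-authed users, since the datascript state
   inside the reconciler is shared across requests."
  [route make-reconciler]
  (or (get @non-authed-reconcilers route)
      (let [reconciler (make-reconciler route)]
        (swap! non-authed-reconcilers assoc route reconciler)
        reconciler)))
```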

om.dom/render-to-str optimizations

Most of the time is spent in our components' (render [this] ...) calls, not on the StringBuffer creation. Could we maybe cache the return of (render [this] ...) and use the cache whenever the props are equal for the component? This might be possible to do in (om.dom/render-component [c]). More hammock time is needed.
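As a rough illustration of that idea, a props-keyed markup cache could look something like the sketch below. The component-key and render-fn arguments are hypothetical, and where this would hook into om.dom's render path is exactly the open question above:

```clojure
;; Sketch only: memoize rendered markup on a component's props.
;; `component-key` and `render-fn` are hypothetical; wiring this into
;; om.dom/render-to-str / render-component is left for hammock time.
(defonce markup-cache (atom {}))

(defn render-str-cached
  "Returns cached markup when `props` are equal to a previously rendered
   value for this component, otherwise renders and caches the result."
  [component-key props render-fn]
  (let [cache-key [component-key props]]
    (or (get @markup-cache cache-key)
        (let [markup (render-fn props)]
          (swap! markup-cache assoc cache-key markup)
          markup))))
```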