
Filter copy/paste, reorganising and sharing

We'd love some reorganisation and management features for filters across a project:

  • Ability to move individual filters up and down, to quickly reorganise the order in which they're processed.
  • Ability to move entire filter groups up and down, for the same benefit.
  • Ability to copy filters from one page to another (a button to copy a CSV/text version of a filter to the clipboard, then paste it into another page, perhaps?)
  • And the ability to create filter presets (both individual filters and filter groups). For instance, "filter these assets to ones appearing in season 6": our project has a crazy number of ways that this type of basic filter can be built, and some of them are horribly inefficient.

We suffer a lot of extremely long page loads due to the inefficiency of some of our users' filters. Just being able to rearrange filters, rather than having to delete them all and start from scratch, as well as having presets to choose from, would make a huge difference in the time it takes to test best practices and apply them across tens and tens of pages.


On a related note, some kind of performance metrics tool would be fantastic for us admins who want to see which filters are killing a page's load times. I'm sure something like this may already exist for the backend, but is there any chance we could get a frontend tool to help us diagnose bad filtering?

Cheers,

Paul

1 comment

  • Tony Aiello

    Big upvotes for all the suggested filter-management features!

    Re: your related note, i.e. performance monitoring:

    We've been going down this road so far this year. After extensive discussions with Sg Support, the bottom line right now is that there are some backend bits available, but you're on your own for the frontend. And what you can do is largely determined by whether you're cloud- or local-hosted. We're local-hosted. So far, here's what we've done:

    - install the monitoring tools at https://github.com/shotgunsoftware/enterprise-toolbox -> in particular, the shotgun_log_analyzer. It can give you the most expensive queries for a day, broken down by person, by page, and as a top-20 slowest-queries list. I've set it up to run as a daily cron, archiving each day's log summary for tracking purposes. Obviously not real-time, but helpful. It would be great if the Shotgunners tweaked this script to include the actual # of hits per person / page / query so that medians and averages can be calculated. Knowing that a single very expensive query happened is good, but knowing that a pretty-expensive query that didn't quite make it into the top-20 slowest happened 100 times is even better.
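
    Here's a rough Python sketch of the kind of hits + median/average summary I mean. The log-line format (a page id and a duration on each request line) is just a placeholder guess, not the real production.log layout, so adjust the regex to whatever your logs actually contain:

        # Sketch only: summarise per-page hit counts and median/average timings.
        # LINE_RE is an assumed log format, NOT the real production.log layout.
        import re
        import statistics
        from collections import defaultdict

        LINE_RE = re.compile(r"page_id=(?P<page>\d+).*?duration=(?P<ms>\d+(?:\.\d+)?)")

        def summarize(log_path):
            timings = defaultdict(list)   # page id -> list of durations (ms)
            with open(log_path) as fh:
                for line in fh:
                    m = LINE_RE.search(line)
                    if m:
                        timings[m.group("page")].append(float(m.group("ms")))
            rows = [(page, len(v), statistics.mean(v), statistics.median(v), max(v))
                    for page, v in timings.items()]
            # Rank by total time spent, so a pretty-expensive query hit 100 times
            # outranks a single very-expensive one.
            rows.sort(key=lambda r: r[1] * r[2], reverse=True)
            return rows

        if __name__ == "__main__":
            for page, hits, mean, median, worst in summarize("production.log")[:20]:
                print("page %s: %d hits, mean %.0f ms, median %.0f ms, worst %.0f ms"
                      % (page, hits, mean, median, worst))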

    - install filebeat on the shotgun server to parse whatever log files you're interested in (production.log, apache_access.log, apache_errors.log, etc.) AND set up an ELK-stack server (Elasticsearch, Logstash, Kibana) to which the filebeat results get directed. In particular, you'd set up a dashboard with two saved *bubble-chart* graphs: one for regular pages and another for detail pages. These should graph the PAGE_TIMING results from the production.log. What we've done is set those up to bucket the timing results in 30-minute increments (the X-axis), where the Y-axis is the total page load time and the "z-axis", i.e. each bubble's diameter, is the number of hits on the page; each bubble in each 30-minute time bucket represents a single page. We're still getting this up and running. Here's an example of our progress; the 5-digit numbers in the legend are page numbers (it would be nice if Shotgun output the page *name* concatenated; we might write that as a customization to filebeat).
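
    For anyone who wants to sanity-check the Kibana numbers outside the dashboard, the same 30-minute bucketing can be done in a few lines of Python. The PAGE_TIMING parse below (timestamp, page id, total ms) is an assumed format -- check your own production.log and adjust:

        # Sketch only: bucket PAGE_TIMING lines into 30-minute windows per page.
        # Hit count per bucket drives bubble size; summed/mean time drives the Y axis.
        import re
        from collections import defaultdict
        from datetime import datetime, timedelta

        PAGE_TIMING_RE = re.compile(
            r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}).*PAGE_TIMING.*"
            r"page=(?P<page>\d+).*total=(?P<ms>\d+)")

        def bucket_page_timings(log_path, bucket_minutes=30):
            buckets = defaultdict(lambda: [0, 0.0])   # (bucket start, page) -> [hits, total ms]
            with open(log_path) as fh:
                for line in fh:
                    m = PAGE_TIMING_RE.search(line)
                    if not m:
                        continue
                    ts = datetime.strptime(m.group("ts"), "%Y-%m-%d %H:%M:%S")
                    start = ts - timedelta(minutes=ts.minute % bucket_minutes,
                                           seconds=ts.second)
                    key = (start, m.group("page"))
                    buckets[key][0] += 1
                    buckets[key][1] += float(m.group("ms"))
            return buckets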

    - you may be able to get the Shotgunners to help you do something similar with statsd. In that case you'll need a database server to consume and store the statsd output, and then you'd probably set up Grafana as a front-end on that db server to do the same kind of plot I described above. We're working on this too, which would mean we could skip the filebeat process, but filebeat turns out to be very lightweight and doesn't impact performance, so it may not be worth it.
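
    If it helps to picture what "consume and store the statsd output" involves, here's a bare-bones statsd-style UDP listener in Python. It's only a sketch of the pipeline's shape (the metric names and the print-instead-of-store step are placeholders); in practice you'd point statsd at Graphite or InfluxDB and let Grafana query that directly rather than rolling your own:

        # Sketch only: accept statsd packets ("<name>:<value>|<type>") over UDP
        # and hand them to whatever store Grafana will read from.
        import socket

        def listen(host="0.0.0.0", port=8125):
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.bind((host, port))
            while True:
                data, _addr = sock.recvfrom(4096)
                for packet in data.decode("utf-8", "replace").splitlines():
                    name, _, rest = packet.partition(":")
                    value, _, mtype = rest.partition("|")
                    print(name, value, mtype)   # replace with an insert into your db

        if __name__ == "__main__":
            listen()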

    There are approximately 90 statistics coming out of the production.log / statsd that you could graph and summarize.

