No surprise I use python, but I’ve recently started experimenting with polars instead of pandas. I’ve enjoyed it so far, but I’m not sure the benefits for my team’s work will be enough to outweigh the cost of migrating our existing pandas/numpy code over to polars.

I’ve also started playing with grafana, as a quick dashboarding utility for making basic visualizations on top of some live production databases.

  • Kache@lemm.ee · 17 days ago

    What kind of query optimization can it do for scanning data that’s already in memory?

    • rutrum@lm.paradisus.day (OP) · 16 days ago

      A big feature of polars is only loading the applicable data from disk. But during exploratory data analysis (EDA) you often have the whole dataset in memory already, so filter pushdown won’t help much there. Polars has a good page in their docs about all the optimizations it’s capable of: https://docs.pola.rs/user-guide/lazy/optimizations/
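
      As a rough sketch of how the lazy API expresses this (the file name and column names here are made up for illustration, not our actual code):

      ```python
      import polars as pl

      # Nothing is read from disk yet; this only builds a query plan.
      # "sales.parquet" and the column names are hypothetical.
      lazy = (
          pl.scan_parquet("sales.parquet")
          .filter(pl.col("region") == "EU")   # predicate pushdown: skip non-matching rows at scan time
          .select(["region", "revenue"])      # projection pushdown: read only these columns
      )

      df = lazy.collect()  # the optimizer rewrites and runs the plan here
      ```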

      One I see off the top is projection pushdown, which only reads the columns the final transformation actually needs. In pandas, if you perform a group-by with aggregation and then only look at a few columns, you still paid to aggregate all of the data. In polars’ lazy API, you define the entire pipeline upfront, so it knows not to aggregate the columns you never use; see the comparison sketch below.
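
      Something like this, with a hypothetical in-memory dataset (all names invented for illustration):

      ```python
      import pandas as pd
      import polars as pl

      # Hypothetical dataset already in memory.
      data = {"host": ["a", "b", "a"], "cpu": [0.5, 0.7, 0.9], "mem": [1.0, 2.0, 3.0]}

      # pandas (eager): mean() aggregates every numeric column ("cpu" and "mem"),
      # then we discard all but one of the results.
      pdf = pd.DataFrame(data)
      cpu_by_host_pd = pdf.groupby("host").mean(numeric_only=True)["cpu"]

      # polars (lazy): the whole plan is declared before execution, so the
      # optimizer sees that only "host" and "cpu" are used and never touches "mem".
      cpu_by_host_pl = (
          pl.DataFrame(data)
          .lazy()
          .group_by("host")
          .agg(pl.col("cpu").mean())
          .collect()
      )
      ```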

      • Kache@lemm.ee · 16 days ago

        Hm, that’s kind of interesting

        But my first reaction is that optimizations only at the “Python processing level” are going to be pretty limited, since it won’t have metadata/statistics, and it’d depend heavily on the source data layout, e.g. CSV vs. parquet.

        • rutrum@lm.paradisus.day (OP) · 16 days ago

          You are correct. Some data sources like parquet include metadata that helps with this, but it’s not as robust as what a database keeps, I don’t think. And of course, CSVs have no metadata at all (aside from the header row, I guess).
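
          For what it’s worth, parquet stores per-row-group statistics that engines can use to skip whole chunks without reading them. You can poke at them with pyarrow (the file name is hypothetical):

          ```python
          import pyarrow.parquet as pq

          # Each parquet row group records min/max/null-count per column,
          # which is what lets a scanner skip chunks it doesn't need.
          meta = pq.ParquetFile("sales.parquet").metadata
          stats = meta.row_group(0).column(0).statistics
          print(stats.min, stats.max, stats.null_count)
          ```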

          The actual specification for how to efficiently store tabular data in memory, while still permitting quick filtering, pivoting, i.e. all the transformations you need, is called Apache Arrow. It’s the backend of polars and is also available as a non-default backend for pandas. I’m not familiar with the internals of the format, though.
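
          A small sketch of that shared backend in practice (assumes pandas >= 2.0 for the Arrow-backed dtypes):

          ```python
          import pandas as pd
          import polars as pl

          df = pl.DataFrame({"x": [1, 2, 3]})

          # polars frames are Arrow under the hood, so this conversion
          # is essentially zero-copy.
          arrow_table = df.to_arrow()

          # pandas can hold the same Arrow data via Arrow-backed dtypes
          # instead of its default numpy backend.
          pdf = arrow_table.to_pandas(types_mapper=pd.ArrowDtype)
          ```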