Push Down Computation and In-Memory Cubes

JReport provides significant performance gains, up to 100x in benchmark tests, through "push down" summary computation and in-memory cubes. See the FAQs below.

What advantages does JReport's in-memory cube have over other cube technologies?

JReport's in-memory cube technology lets you use your normal queries and business view metadata and simply check a box to initiate cube creation. No complex data extractions or schema changes are required on the user's part. JReport builds the cube, pre-computing each aggregation for each dimension, and saves it until a report needs it. You can either schedule cube generation to run automatically at certain times or have the cube initialized automatically the first time it is used. A timeout value, such as one hour, can be assigned to each cube; once it expires, the next request for data from the cube triggers automatic generation of a new cube with updated data, as in the sketch below.
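
The following sketch models only the "expire, then regenerate on next request" behavior described above. The TimedCube class, its fields, and the Supplier used to build the data are hypothetical and are not part of JReport's API.

    import java.time.Duration;
    import java.time.Instant;
    import java.util.List;
    import java.util.function.Supplier;

    // Hypothetical sketch: a cached cube that is rebuilt lazily once its timeout expires.
    public class TimedCube<T> {
        private final Supplier<List<T>> buildCube;   // e.g. runs the query and pre-aggregates
        private final Duration timeout;              // e.g. Duration.ofHours(1)
        private List<T> rows;
        private Instant builtAt;

        public TimedCube(Supplier<List<T>> buildCube, Duration timeout) {
            this.buildCube = buildCube;
            this.timeout = timeout;
        }

        // First use, or any use after the timeout, triggers regeneration with fresh data.
        public synchronized List<T> rows() {
            boolean expired = builtAt == null
                    || Instant.now().isAfter(builtAt.plus(timeout));
            if (expired) {
                rows = buildCube.get();
                builtAt = Instant.now();
            }
            return rows;
        }
    }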

What is "push down" summary computation technology and how does it work?

"Push down" means pushing the group level calculations from JReport Server down to the DBMS at runtime. By pushing down these calculations, the DBMS' grouping and calculation capability can be used, and thus the reports' running and analysis efficiency will be improved. As for how it works, the JReport engine will analyze the queries first. If the queries and the report components satisfy the conditions for "push down", JReport will reorganize the SQL statement to push the aggregation calculation down to the DBMS based on the SQL 92 standard. For example, JReport will auto-generate the aggregation expressions in the select clause and add group by and order by clauses.  The DBMS will handle the group aggregations and return the result sets to the JReport engine, which then lays out the report using the result sets.

What are in-memory cubes and why are they important?

In-memory cubes contain cached data used for analysis and reporting. Previously, a report's data had to be fetched from the DBMS each time the report was run. In JReport 11.1, the data-fetching process can be scheduled to run at specified times. When the user then runs the report and performs analysis actions such as filtering, drilling, and adding or removing groups, the data is retrieved from the saved data in the cube without going back to the DBMS. Without in-memory cube technology, every analysis action such as filtering, drilling, or swapping groups forces JReport to re-fetch data from the DBMS, which is much more resource intensive.
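
The sketch below illustrates why cached rows are enough for analysis actions: once the data is in memory, filtering and regrouping are local operations with no DBMS round trip. The Sale record and the sample rows are hypothetical and stand in for whatever data the cube actually holds.

    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    public class CachedAnalysis {
        record Sale(String region, String product, double amount) {}

        public static void main(String[] args) {
            // Pretend these rows were fetched once, e.g. by a scheduled cube build.
            List<Sale> cube = List.of(
                new Sale("East", "Widget", 120.0),
                new Sale("East", "Gadget",  80.0),
                new Sale("West", "Widget", 200.0));

            // Filter + group by region: served entirely from the in-memory data.
            Map<String, Double> totalByRegion = cube.stream()
                .filter(s -> s.amount() > 50)
                .collect(Collectors.groupingBy(Sale::region,
                         Collectors.summingDouble(Sale::amount)));
            System.out.println(totalByRegion);

            // Swapping the group to product is another in-memory pass, not a new query.
            Map<String, Double> totalByProduct = cube.stream()
                .collect(Collectors.groupingBy(Sale::product,
                         Collectors.summingDouble(Sale::amount)));
            System.out.println(totalByProduct);
        }
    }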

Does a report need to be created specifically to use an in-memory cube?

No. Cube creation is transparent to the user, and the report templates are identical either way. Create the cube, and the report uses the cube; delete the cube, and JReport gets the data directly from the production database in real time. No changes to the report template are required.
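
A minimal sketch of that transparency, using hypothetical class and method names rather than JReport's API: the report always asks the same object for its rows, and whether a cube is used depends only on whether one currently exists.

    import java.util.List;
    import java.util.Optional;
    import java.util.function.Supplier;

    public class ReportDataSource {
        private Optional<List<Object[]>> cube = Optional.empty(); // empty = no cube exists
        private final Supplier<List<Object[]>> liveQuery;         // runs against the production DB

        public ReportDataSource(Supplier<List<Object[]>> liveQuery) {
            this.liveQuery = liveQuery;
        }

        public void createCube() { cube = Optional.of(liveQuery.get()); }
        public void deleteCube() { cube = Optional.empty(); }

        // Same call either way; the report template does not change.
        public List<Object[]> rows() {
            return cube.orElseGet(liveQuery);
        }
    }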

Why does "push down" computation technology provide such a big increase in performance (up to 100x)?

"Push down" computation technology produces large increases in performance because the need to fetch large amounts of data from the DBMS for aggregation functions is eliminated. While that may seem simple enough the key is that SQL aggregation queries remain unchanged while JReport automatically reorganizes the SQL statements and "push down" functions such as average, sum, count, min, max, etc. For relatively small data sets of 10,000 records, the performance difference is not as great, but for large sets of millions of records, the difference is much more pronounced (up to 100x). These performance increases were measured with a performance comparison benchmark using TCP-H data.