

5 Snowflake Features I Can’t Live Without


As a Senior Product Manager at Astrato Analytics and a Snowflake Data Superhero, I’m often asked which Snowflake features are indispensable in my daily workflow. Snowflake’s capabilities go far beyond traditional data warehouses, and there are a few standout features that make a real difference in how we at Astrato create innovative data solutions. Here are the top five Snowflake features I can’t live without:

1. Time Travel for Data Analytics

Snowflake’s Time Travel feature is a game-changer. It lets you access historical data at any point within a defined retention period, which is incredibly useful for recovering accidentally deleted data or analyzing what changed and when. It also shines in analytics: you can compare data across points in time without maintaining snapshots, which is particularly valuable for writeback-driven data apps, i.e. operational applications where users also write data back, e.g. for data capture or commentary.

When working with real-time data (think call centers), this is massively helpful for holding the data being analyzed still, so your analysis tells a consistent story.


With Time Travel enabled, I can also use the UNDROP TABLE command, which in particular has saved me more times than I care to admit. It ensures that even if something goes wrong, you can quickly revert to a previous state without missing a beat, especially when building out a new database or schema.

-- Recover a table that was accidentally dropped
UNDROP TABLE my_important_table;
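Beyond recovery, Time Travel can be queried directly in SELECT statements, which is how I keep an analysis pinned to a moment in time. A minimal sketch (the table name, offset, and timestamp are illustrative):

```sql
-- Query the table as it looked 30 minutes ago
SELECT * FROM my_important_table AT(OFFSET => -60*30);

-- Or pin the analysis to a specific timestamp for a consistent story
SELECT * FROM my_important_table
  AT(TIMESTAMP => '2024-05-01 09:00:00'::TIMESTAMP_LTZ);
```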


2. Snowflake Cortex: Integrated LLMs & Machine Learning

LLMs can accelerate your data analytics experience, so it’s an absolute game changer that Snowflake Cortex integrates machine learning and LLMs directly within the data warehouse. This allows for real-time data science and LLM-driven experiences in data apps without moving data: bring LLMs to your data, faster, safer, cheaper.

Here’s the framework I like to use when working with Cortex. It draws a clear distinction between the purpose/context of the request, the rules/constraints, and finally the user request:


- You must provide a text-based analysis based on the user task, using only the data provided below.
- Perform the analysis yourself; do not run or provide any further code.
- Describe insightful RESULTS of analysis
- DO NOT describe analysis steps

With Cortex, Astrato Analytics can connect to LLM and ML models for predictive analytics and advanced data processing right where the data lives. This seamless integration speeds up operational workflows such as forecast scenario modeling and enhances analytical capabilities for business users across the entire organization.

This is best used with Astrato actions to build no-code, AI-accelerated data apps, helping business users get fast answers to business questions or summarize data. I prefer to use stored procedures here, but you can use a standard SELECT statement in a custom SQL block. In the blocks below, I call a Cortex LLM stored procedure that already has a chosen model; I provide information about my data view/semantic layer and can then ask a business question (see our demo, more on this soon!).
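For readers without a stored procedure set up, the same framework can be sketched with a direct call to Cortex’s built-in COMPLETE function. The model name, prompt, and table below are illustrative assumptions, not our production setup:

```sql
-- Hypothetical example: purpose/context, rules, then the user request in one prompt
SELECT SNOWFLAKE.CORTEX.COMPLETE(
  'mistral-large',  -- model choice is an assumption
  'You are analyzing sales data for business users. '
  || 'Provide a text-based analysis using only the data below; '
  || 'describe insightful RESULTS, not analysis steps. '
  || 'User question: which region grew fastest last quarter? '
  || 'Data: '
  || (SELECT TO_JSON(ARRAY_AGG(OBJECT_CONSTRUCT(*))) FROM sales_summary)
) AS llm_answer;
```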

3. Role-Based Access Control (RBAC)

Security and data governance are critical, and Snowflake’s RBAC ensures that we can tightly control access to data and actions within our platform. By defining roles and permissions, we can ensure that users have appropriate access levels, enhancing both security and compliance.

-- Create a new role, grant it permissions, and assign it to a user
-- (database and schema names are examples)
CREATE ROLE analyst;
GRANT USAGE ON DATABASE sales_db TO ROLE analyst;
GRANT SELECT ON ALL TABLES IN SCHEMA sales_db.public TO ROLE analyst;
GRANT ROLE analyst TO USER jane_doe;


I felt this feature was so important that I made sure we directly leveraged it in our BI product; this way, security is done right once, in Snowflake. There is a longer document here on how it works. For Snowflake customers, this means BI & analytics really is connect & go: there is no need to recreate or replicate security done elsewhere, because it is centrally managed, just like your data.
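Because grants live centrally in Snowflake, they can also cover objects that don’t exist yet. A short sketch using future grants (the database, schema, and role names are illustrative):

```sql
-- New tables in this schema become automatically visible to the analyst role
GRANT USAGE ON DATABASE sales_db TO ROLE analyst;
GRANT USAGE ON SCHEMA sales_db.public TO ROLE analyst;
GRANT SELECT ON FUTURE TABLES IN SCHEMA sales_db.public TO ROLE analyst;
```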

💡Not all data requires this! It is best to use a service account if all users are accessing the same data, since this will make more effective use of Snowflake’s cache.


4. Multi-Cluster Warehouse: Concurrency solved

Scalability is a non-negotiable requirement for us at Astrato, and Snowflake’s Multi-Cluster Warehouse provides exactly that. It allows for automatic scaling of compute resources to handle varying workloads, ensuring consistent performance during peak times without manual intervention. This is crucial for maintaining seamless user experiences and efficient processing. 

For me, this means high performance in live-query analytics tools like Astrato, with no queued queries even when thousands of business users are accessing business-critical data.

-- Create a multi-cluster warehouse (name and sizing values are illustrative)
CREATE WAREHOUSE reporting_wh WITH
  WAREHOUSE_SIZE = 'MEDIUM'
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 4
  SCALING_POLICY = 'STANDARD';


5. WIDTH_BUCKET: Underrated Powerhouse

The WIDTH_BUCKET function is easy to overlook as just another standard feature, but it’s incredibly powerful for data analysis and visualization. It simplifies segmenting data into equal-width bins, which is essential for creating histograms and other forms of data distribution analysis. This function plays a vital role in how we visualize data in Astrato, making complex data sets more comprehensible and actionable.

-- Segment sales amounts into 10 equal-width buckets between 0 and 10,000
SELECT
  WIDTH_BUCKET(sales_amount, 0, 10000, 10) AS sales_bucket,
  COUNT(*) AS count
FROM sales_data
GROUP BY sales_bucket
ORDER BY sales_bucket;

The synergy of these features in Snowflake not only supports but also elevates the capabilities of Astrato Analytics. By leveraging these tools, we ensure our data solutions are robust, scalable, and secure, driving more informed decision-making across organizations.


A real example: for lead time (how far in advance a booking was made), we had around 1,000 unique values, far too many to chart directly. WIDTH_BUCKET (so good we decided to build it into Astrato’s no-code UI) enabled us to still visualize this, across a smaller number of buckets.

-- Map each value to the lower bound of its bucket, in original units
-- (select-list expressions; the surrounding query is omitted in the original)
((WIDTH_BUCKET(PURCHASE_LEAD, 0, 140, 20) - 1) * 140/20)::int AS "Booking_Lead_Time_(days)",
((WIDTH_BUCKET(LENGTH_OF_STAY, 0, 70, 10) - 1) * 70/10)::int AS "Length_of_Stay_(days)"