Course Highlights

  • Submitted to NCVET for NSQF Alignment.
  • The Data Application Builder’s Workshop (DABW) is the third workshop in Snowflake’s Hands-on Essentials Workshop series.
  • The workshop covers the creation of applications both inside of Snowflake (Streamlit in Snowflake) and outside of Snowflake (Original Streamlit). It includes a review of many of the database concepts covered in the DWW and CMCW workshops. It also introduces concepts like User Defined Functions, Variables, and Interfaces for those new to data careers.
  • The workshop requires hands-on lab work to earn a badge. The lab work is auto-graded.
Course Details

Learning Objectives

What will you learn in Snowflake Hands-on Essentials Series Part 3 - Data Application?

The Snowflake Hands-on Essentials Series - Data Application aims to equip learners with the practical skills to build data applications powered by Snowflake. Here's a breakdown of the key learning objectives:

  • The primary objective is to enable learners to build functional applications that interact with Snowflake data. This includes understanding how to both write data to and read data from Snowflake.
  • Gain proficiency in using Streamlit to create interactive web applications that connect to Snowflake.
  • Learn to build user interfaces (UIs) for data input and visualization.
  • Learn to use Streamlit in Snowflake.
  • Develop the ability to write Python code to insert and retrieve data from Snowflake tables.
  • Learn how to use Python to process and manipulate data within the application.
  • Understand how to make REST API calls to collect data from external sources.
  • Learn how to use API keys for secure data access.
  • Gain practical experience in using GitHub for version control and collaborative development.
  • Learn how to manage code repositories and track changes.
  • Learn to use the SnowSQL command line interface to interact with Snowflake.
  • Gain a basic understanding of Snowpark and how it enables data processing with programming languages.
  • Develop hands-on experience through practical labs and exercises.
  • Build functional data applications that demonstrate acquired skills.
  • Understand how the automated lab grading (DORA) works.
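The objective of writing data to Snowflake from Python can be sketched without a live connection. The table and column names below are hypothetical, and in the workshop the statement would be executed through a Snowflake session rather than just assembled; this sketch only shows the bind-parameter pattern that keeps user input out of the SQL string itself.

```python
# Build a parameterized INSERT for a hypothetical ORDERS table.
# Binding values separately from the SQL text is the safe pattern
# for user-supplied input in any database client.

def build_insert(table, row):
    """Return (sql, params) for a positional-bind INSERT."""
    columns = ", ".join(row.keys())
    placeholders = ", ".join("?" for _ in row)
    sql = f"INSERT INTO {table} ({columns}) VALUES ({placeholders})"
    return sql, tuple(row.values())

sql, params = build_insert("ORDERS", {"fruit": "apple", "quantity": 3})
print(sql)     # INSERT INTO ORDERS (fruit, quantity) VALUES (?, ?)
print(params)  # ('apple', 3)
```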
Reasons to enrol

Why should you take Snowflake Hands-on Essentials Series Part 3 - Data Application course?

At the end of this course, the learner will be able to use Snowflake to:

  • Create a Streamlit in Snowflake (SiS) App that interacts with both the user and a backend database.
  • Create a Streamlit App that interacts with both the user and a backend database.
  • Apply Snowflake skills learned in previous series workshops with less step-by-step instruction to achieve development goals.
Ideal Participants

Who should take Snowflake Hands-on Essentials Series Part 3 - Data Application course?

  • Designed for people new to Snowflake or to database work in general. The course can be used by managers who simply want to understand what Snowflake is generally capable of, or by those considering a career as a data professional. Likewise, seasoned data professionals will find the courses in this series a quick and easy introduction to tasks they already know, performed in a tool they do not.
Curriculum

Curriculum for the Snowflake Hands-on Essentials Series Part 3 - Data Application Course

  • Building Data Applications.
  • Utilizing Streamlit.
  • Python Integration.
  • API Interaction.
  • Version Control with GitHub.
  • SnowSQL CLI.
  • Snowpark Introduction.
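The "API Interaction" and "Python Integration" topics combine as in this sketch. In the workshop the response body would come from a live API call (e.g. something like requests.get(url).json()); here a canned payload with made-up fields is parsed instead, so the JSON-handling step stands on its own.

```python
import json

# Canned stand-in for an API response body; the field names are
# illustrative, not taken from any particular API.
canned_response = '{"name": "banana", "nutritions": {"calories": 96}}'

data = json.loads(canned_response)          # same role as response.json()
calories = data["nutritions"]["calories"]   # drill into nested fields
print(f"{data['name']}: {calories} kcal")
```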
Skills and Tools

Tools you will learn in Snowflake Hands-on Essentials Series Part 3 - Data Application course

  • Create a Streamlit in Snowflake app.
  • View the stage created to house your SiS app.
  • Make light edits to the template SiS code.
  • CHALLENGE: Change the emoji in a Streamlit title.
  • Use Streamlit documentation to copy templated code.
  • Use st.selectbox.
  • CHALLENGE: Create a table.
  • Download and view a text file.
  • Use the Load Data Wizard to create a file format.
  • Create a Snowflake-Managed (internal) stage.
  • Load files into an internal stage.
  • Use VALIDATION_MODE.
  • Query staged data.
  • COPY INTO with reordered columns.
  • Use st.dataframe.
  • Import the col function from snowpark.
  • Use Snowpark select(col(...)) syntax.
  • Use st.multiselect.
  • Use st.write.
  • Use st.text.
  • Build an IF block.
  • CHALLENGE: Create a table (ORDERS).
  • Initialize a variable.
  • Build a FOR loop.
  • Create and use a SQL INSERT using Python/Streamlit.
  • Use st.button.
  • View files in the stage that houses your SiS app.
  • Create a Resource Monitor.
  • View the Snowflake Anaconda channel to see a list of packages supported for use in SiS.
  • CHALLENGE: Modify credits on Resource monitor.
  • Use st.text_input.
  • Use ALTER TABLE ... ADD COLUMN.
  • CHALLENGE: Create a new SiS app.
  • Use st.experimental_data_editor or st.data_editor.
  • Add a SQL Merge (written in Snowpark syntax).
  • Create a sequence.
  • CHALLENGE: Truncate a table.
  • Add a column that uses a SEQUENCE as its default value and has a UNIQUE constraint enforced.
  • Audit object ownership using the home page object browser.
  • Use a TRY/EXCEPT Block.
  • Use max_selections property on multiselect.
  • Create and call variables from a worksheet.
  • Create and call a UDF.
  • CHALLENGE: Given requirements, write a UDF.
  • Create a GitHub Account.
  • Create a GitHub Repo.
  • Create files in the repo you created.
  • Transfer code from SiS files to GitHub files.
  • Make a few changes to the files. Do not try to run them yet.
  • Set up a Streamlit Account.
  • Give the Streamlit Account access to your GitHub Account.
  • Set up the SECRETS file in your app.
  • Add the python requests library to a Streamlit project.
  • Call an external API from a Streamlit app.
  • Use the json() Python function.
  • Get a variable value from user input and add it to some output on the screen.
  • CHALLENGE: Add a column to a table and populate it with a default value. Then update several rows with custom values.
  • CHALLENGE: Add the pandas library to the project and the file.
  • Use a different value in a SQL command than what was chosen in the UI.
  • Use edit/commit cycle for files in GitHub.
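Two items from the list above, writing a UDF and using a TRY/EXCEPT block, can be combined in one small sketch. The function body is ordinary Python of the kind Snowflake can register as a Python UDF; the name and fallback logic are made up for illustration.

```python
# Hypothetical UDF body: convert a quantity string to an integer,
# falling back to 0 when the input is malformed or missing.
# Snowflake Python UDFs wrap ordinary functions like this one.

def safe_quantity(text):
    try:
        return int(text)
    except (TypeError, ValueError):
        # Bad or missing input: return a default instead of failing the query.
        return 0

print(safe_quantity("7"))      # 7
print(safe_quantity("seven"))  # 0
```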