Up next you’ll find an overview of Pandas, a Python library that is old but gold and a must-know for anyone working with data in the Python world, along with a glance at Seaborn, a Python library for statistical visualization. In our experience the two complement each other really well and are worth learning together. We hope this post serves as a first guide for diving into both and kickstarts your data handling & visualization journey.
By now, you’ll already know that the Pandas library is one of the most popular tools for data manipulation and analysis, and you’ll have explored its fast, flexible, and expressive data structures, perhaps with the help of DataCamp’s Pandas Basics cheat sheet.
Yet there is still much functionality built into this package to explore, especially once you get hands-on with the data: you’ll need to reshape or rearrange your data, iterate over DataFrames, visualize your data, and much more. And this can be even more difficult than “just” mastering the basics.
That’s why today’s post introduces a new, more advanced Pandas cheat sheet.
It’s a quick guide through the functionalities that Pandas can offer you when you get into more advanced data wrangling with Python.
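As a taste of the reshaping the cheat sheet covers, here is a minimal sketch that pivots a long-format DataFrame into wide format and melts it back again. The column and value names are invented for illustration:

```python
import pandas as pd

# A small long-format table; the names here are illustrative only
long_df = pd.DataFrame({
    "date": ["2024-01-01", "2024-01-01", "2024-01-02", "2024-01-02"],
    "city": ["Oslo", "Bergen", "Oslo", "Bergen"],
    "temp": [-3, 2, -1, 3],
})

# Reshape: one row per date, one column per city
wide = long_df.pivot(index="date", columns="city", values="temp")

# And back again: melt the wide table into long format
back = wide.reset_index().melt(id_vars="date", value_name="temp")
```

`pivot` and `melt` are inverses of each other here, which makes them a handy pair to learn together when moving between tidy (long) data and spreadsheet-style (wide) data.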
This complete tutorial on learning Data Science with Python from scratch was posted by Kunal Jain. Kunal is a postgraduate from IIT Bombay in Aerospace Engineering and has spent more than 8 years in the field of Data Science. He learned the basics of Python within a week, and since then he has not only explored the language in depth but also helped many others learn it.
Python was originally a general-purpose language, but over the years, with strong community support, it gained dedicated libraries for data analysis and predictive modeling.
Due to the lack of resources on Python for data science, he decided to create this tutorial to help others learn Python faster. In this tutorial, you will take in bite-sized pieces of information about how to use Python for data analysis, chew on them until you are comfortable, and practice on your own.
SQLite is a database engine that makes it simple to store and work with relational data. Much like the CSV format, SQLite stores data in a single file that can easily be shared with others. Most programming languages and environments have good support for working with SQLite databases. Python is no exception: a library for accessing SQLite databases, called sqlite3, has been included with Python since version 2.5. In this post, we’ll walk through how to use sqlite3 to create, query, and update databases. We’ll also cover how to simplify working with SQLite databases using the pandas package. We’ll be using Python 3.5, but the same approach should work with other versions of Python.
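As a quick preview of the sqlite3 workflow described above, here is a minimal sketch using an in-memory database. The table and column names are invented for illustration:

```python
import sqlite3

# Connect to an in-memory database; pass a filename to persist to disk
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Create a table and insert a couple of rows
cur.execute("CREATE TABLE airlines (id INTEGER PRIMARY KEY, name TEXT)")
cur.executemany("INSERT INTO airlines (name) VALUES (?)",
                [("Air Example",), ("Demo Airways",)])
conn.commit()

# Query the rows back
rows = cur.execute("SELECT id, name FROM airlines ORDER BY id").fetchall()
print(rows)  # [(1, 'Air Example'), (2, 'Demo Airways')]

conn.close()
```

Note the `?` placeholders in the INSERT statement: letting sqlite3 handle parameter substitution avoids SQL injection and quoting headaches.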
Before we get started, let’s take a quick look at the data we’ll be working with. We’ll be looking at airline flight data, which contains information on airlines, airports, and routes between airports. Each route represents a repeated flight that an airline flies between a source and a destination airport.
All of the data is in a SQLite database called flights.db, which contains three tables – airlines, airports, and routes. You can download the data here.
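Reading one of these tables into a DataFrame is a one-liner with pandas. The sketch below builds a tiny in-memory stand-in for the routes table so it runs on its own; the real flights.db schema may differ, and with the downloaded file you would simply connect to it directly:

```python
import sqlite3
import pandas as pd

# Stand-in for flights.db; with the real file you would instead write:
# conn = sqlite3.connect("flights.db")
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE routes (airline TEXT, source TEXT, dest TEXT)")
conn.executemany("INSERT INTO routes VALUES (?, ?, ?)",
                 [("2B", "AER", "KZN"), ("2B", "ASF", "KZN")])
conn.commit()

# Pull the whole table into a DataFrame in one call
routes = pd.read_sql_query("SELECT * FROM routes", conn)
print(routes.head())

conn.close()
```

`read_sql_query` handles the cursor, fetching, and column names for you, which is exactly the kind of simplification the post promises.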
Pandas has got to be one of my favourite libraries… Ever.
Pandas allows us to deal with data in a way that we humans can understand: with labelled columns and indexes. It lets us effortlessly import data from files such as CSVs, quickly apply complex transformations and filters to our data, and much more. It’s absolutely brilliant.
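The import-filter-transform workflow mentioned above looks something like this minimal sketch; the CSV contents and column names are made up for illustration:

```python
import io
import pandas as pd

# Stand-in for a file on disk; pd.read_csv("data.csv") works the same way
csv_data = io.StringIO("name,age,score\nAda,36,91\nBob,41,67\nCleo,29,78\n")
df = pd.read_csv(csv_data)

# Filter: keep only rows with a score above 70
passed = df[df["score"] > 70]

# Transform: derive a new labelled column from an existing one
passed = passed.assign(grade=passed["score"].apply(lambda s: "A" if s >= 90 else "B"))
print(passed)
```

The boolean-mask filter and `assign` both return new DataFrames rather than modifying `df` in place, which keeps each step of the pipeline easy to inspect.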
Along with NumPy and Matplotlib, I feel it helps create a really strong base for data exploration and analysis in Python. SciPy (which will be covered in the next post) is of course a major component and another absolutely fantastic library, but I feel these three are the real pillars of scientific Python.
So without further ado, let’s get on with the third post in this series on scientific Python and take a look at Pandas. Don’t forget to check out the other posts if you haven’t yet!