{"id":1323,"date":"2020-11-23T10:27:54","date_gmt":"2020-11-23T10:27:54","guid":{"rendered":"https:\/\/thenextweb.com\/?p=1329038"},"modified":"2020-11-23T10:27:54","modified_gmt":"2020-11-23T10:27:54","slug":"a-beginners-guide-to-web-scraping-with-python-and-scrapy","status":"publish","type":"post","link":"https:\/\/www.londonchiropracter.com\/?p=1323","title":{"rendered":"A beginner\u2019s guide to web scraping with Python and Scrapy"},"content":{"rendered":"\n<p class=\"p1\">Since their inception,&nbsp;websites&nbsp;have been used to share information. Whether it is a Wikipedia article, a YouTube channel, an Instagram account, or a Twitter handle, they are all packed with interesting data that is available to everyone with access to the&nbsp;internet&nbsp;and a&nbsp;web browser.<\/p>\n<p class=\"p1\">But what if we want to retrieve specific data programmatically?<\/p>\n<p class=\"p1\">There are two ways to do that:<\/p>\n<ol>\n<li class=\"p1\">Using an official API<\/li>\n<li class=\"p1\">Web scraping<\/li>\n<\/ol>\n<p class=\"p1\">The concept of an&nbsp;API (Application Programming Interface)&nbsp;was introduced to exchange data between different systems in a standard way. But most of the time, website owners don\u2019t provide any API. In that case, web scraping is the only way left to extract the data.<\/p>\n<p class=\"p1\">Basically, every web page is returned from the server in&nbsp;HTML&nbsp;format, meaning that our actual data is nicely packed inside HTML elements. This makes the whole process of retrieving specific data straightforward.<\/p>\n<p class=\"p1\">This tutorial will be your guide to learning&nbsp;web scraping&nbsp;using the Python programming language. First, I\u2019ll walk you through some basic examples to familiarize you with web scraping. 
Later on, we\u2019ll use that knowledge to extract data about football matches from&nbsp;Livescore.cz.<\/p>\n<p><em>[Read:&nbsp;<a class=\"c-link c-message_attachment__title_link\" href=\"https:\/\/thenextweb.com\/neural\/2020\/11\/09\/neurals-market-outlook-for-artificial-intelligence-in-2021-and-beyond\/\" target=\"_blank\" rel=\"noreferrer noopener\" data-qa=\"message_attachment_title_link\"><span dir=\"auto\">Neural\u2019s market outlook for artificial intelligence in 2021 and beyond<\/span><\/a>]<\/em><\/p>\n<h2 id=\"getting-started\">Getting Started<\/h2>\n<p class=\"p1\">To get us started, you will need to start a new Python 3 project and install&nbsp;Scrapy&nbsp;(a web scraping and web crawling library for Python). I\u2019m using&nbsp;pipenv&nbsp;for this tutorial, but you can use pip and venv, or conda.<\/p>\n<p class=\"p1\"><em>pipenv install scrapy<\/em><\/p>\n<p class=\"p1\">At this point, you have Scrapy, but you still need to create a new web scraping project, and for that Scrapy provides us with a command-line tool that does the work for us.<\/p>\n<p class=\"p1\">Let\u2019s now create a new project named&nbsp;web_scraper&nbsp;by using the Scrapy CLI.<\/p>\n<p class=\"p1\">If you are using&nbsp;pipenv&nbsp;like me, use:<\/p>\n<p class=\"p1\"><em>pipenv run scrapy startproject web_scraper .<\/em><\/p>\n<p class=\"p1\">Otherwise, from your virtual environment, use:<\/p>\n<p class=\"p1\"><em>scrapy startproject web_scraper .<\/em><\/p>\n<p class=\"p1\">This will create a basic project in the current directory with the following structure:<\/p>\n<div class=\"highlight\">\n<pre><figure class=\"post-image post-mediaBleed alignnone\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-1329039 lazy\" src=\"https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.39.37.png\" alt width=\"811\" height=\"418\" sizes=\"(max-width: 811px) 100vw, 811px\" data-lazy=\"true\" 
data-srcset=\"https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.39.37.png 1398w, https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.39.37-280x144.png 280w, https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.39.37-524x270.png 524w, https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.39.37-262x135.png 262w, https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.39.37-796x410.png 796w\"><\/figure><\/pre>\n<\/div>\n<h2 id=\"building-our-first-spider-with-xpath-queries\">Building our first Spider with XPath queries<\/h2>\n<p>We will start our web scraping tutorial with a very simple example. First, we\u2019ll locate the logo of the<span>&nbsp;<\/span><a href=\"https:\/\/livecodestream.dev\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Live Code Stream<\/a><span>&nbsp;<\/span>website inside the HTML. And as we know, it is just text and not an image, so we\u2019ll simply extract this text.<\/p>\n<p id=\"the-code\"><strong>The code<\/strong><\/p>\n<p>To get started, we need to create a new spider for this project. 
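<\/p>
<p>As a reminder, the layout generated by <em>startproject<\/em> (shown in the screenshot above) looks roughly like the following; our spiders will live inside the <strong>spiders<\/strong> folder:<\/p>

```text
.
├── scrapy.cfg          # deploy configuration file
└── web_scraper/        # the project's Python module
    ├── __init__.py
    ├── items.py        # item definitions
    ├── middlewares.py  # spider and downloader middlewares
    ├── pipelines.py    # item pipelines
    ├── settings.py     # project settings
    └── spiders/        # folder where our spiders live
        └── __init__.py
```

<p>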
We can do that by either creating a new file or using the CLI.<\/p>\n<p>Since we already know the code we need, we will create a new Python file at this path:<span>&nbsp;<\/span><strong>\/web_scraper\/spiders\/live_code_stream.py<\/strong><\/p>\n<p>Here are the contents of this file.<\/p>\n<div class=\"highlight\">\n<pre><figure class=\"post-image post-mediaBleed alignnone\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-1329040 lazy\" src=\"https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.40.43.png\" alt width=\"825\" height=\"578\" sizes=\"(max-width: 825px) 100vw, 825px\" data-lazy=\"true\" data-srcset=\"https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.40.43.png 1384w, https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.40.43-280x196.png 280w, https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.40.43-385x270.png 385w, https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.40.43-193x135.png 193w, https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.40.43-796x558.png 796w\"><\/figure><\/pre>\n<\/div>\n<p id=\"code-explanation\"><strong>Code explanation:<\/strong><\/p>\n<ul>\n<li class=\"p1\">First of all, we imported the Scrapy library because we need its functionality to create a Python web spider. This spider will then be used to crawl the specified website and extract useful information from it.<\/li>\n<li class=\"p1\">We created a class and named it&nbsp;LiveCodeStreamSpider. It inherits from&nbsp;scrapy.Spider, which is why we passed it as the base class.<\/li>\n<li class=\"p1\">Now, an important step is to define a unique name for your spider using a variable called&nbsp;name. Remember that you are not allowed to use the name of an existing spider. 
Similarly, you cannot reuse this name for new spiders; it must be unique throughout the project.<\/li>\n<li class=\"p1\">After that, we passed the website URL using the&nbsp;start_urls&nbsp;list.<\/li>\n<li class=\"p1\">Finally, we created a method called&nbsp;parse()&nbsp;that locates the logo inside the HTML code and extracts its text. In Scrapy, there are two ways to find HTML elements inside the source code. These are mentioned below.<\/li>\n<li class=\"p1\">CSS<\/li>\n<li class=\"p1\">XPath<\/li>\n<\/ul>\n<p class=\"p1\">You can even use some external libraries like&nbsp;BeautifulSoup&nbsp;and&nbsp;lxml. But for this example, we\u2019ve used XPath.<br \/>A quick way to determine the XPath of any HTML element is to open it inside Chrome DevTools. Simply right-click on the HTML code of that element, hover the mouse cursor over \u201cCopy\u201d in the popup menu that appears, and click the \u201cCopy XPath\u201d menu item.<\/p>\n<p class=\"p1\">Have a look at the below screenshot to understand it better.<\/p>\n<figure class=\"post-image post-mediaBleed aligncenter\"><img decoding=\"async\" loading=\"lazy\" class=\"lazy loaded lazy\" src=\"https:\/\/livecodestream.dev\/post\/2020-11-18-how-to-turn-the-web-into-data-with-python-and-scrapy\/find-xpath_hub7e3e64a73ee4298452ddd712fc8bae5_469803_700x0_resize_q75_box.jpg\" alt width=\"700\" height=\"356\" data-lazy=\"true\"><figcaption>Find XPath using Chrome Dev Tools<\/figcaption><\/figure>\n<p>By the way, I used<span>&nbsp;<\/span><code>\/text()<\/code><span>&nbsp;<\/span>after the actual XPath of the element to retrieve only the text from that element instead of the full element code.<\/p>\n<p><strong>Note:<\/strong><span>&nbsp;<\/span>You\u2019re not allowed to use any other name for the variable, list, or function mentioned above. These names are pre-defined in the Scrapy library, so you must use them as they are. Otherwise, the spider will not work as intended.<\/p>\n<p id=\"run-the-spider\"><strong>Run the Spider:<\/strong><\/p>\n<p>As we are already inside the<span>&nbsp;<\/span><strong>web_scraper<\/strong><span>&nbsp;<\/span>folder in the command prompt, let\u2019s execute our spider and write the result to a new file<span>&nbsp;<\/span><strong>lcs.json<\/strong><span>&nbsp;<\/span>using the command below. 
Yes, the result will be well-structured in JSON format.<\/p>\n<p>If you are using pipenv:<\/p>\n<div class=\"highlight\" readability=\"7\">\n<pre><code class=\"language-shell\" data-lang=\"shell\">pipenv run scrapy crawl lcs -o lcs.json <\/code><\/pre>\n<\/div>\n<p>Otherwise:<\/p>\n<div class=\"highlight\" readability=\"7\">\n<pre><code class=\"language-shell\" data-lang=\"shell\">scrapy crawl lcs -o lcs.json <\/code><\/pre>\n<\/div>\n<p id=\"results\"><strong>Results:<\/strong><\/p>\n<p>When the above command executes, we\u2019ll see a new file<span>&nbsp;<\/span><strong>lcs.json<\/strong><span>&nbsp;<\/span>in our project folder.<\/p>\n<p>Here are the contents of this file.<\/p>\n<div class=\"highlight\" readability=\"7\">\n<pre><code class=\"language-json\" data-lang=\"json\">[ {<span>\"logo\"<\/span>: <span>\"Live Code Stream\"<\/span>} ] <\/code><\/pre>\n<\/div>\n<h2 id=\"another-spider-with-css-query-selectors\">Another Spider with CSS query selectors<\/h2>\n<p>Most of us love sports, and when it comes to football, it is my personal favorite.<\/p>\n<p>Football tournaments are organized frequently throughout the world. There are several websites that provide a live feed of match results while they are being played. But most of these websites don\u2019t offer any official API.<\/p>\n<p>This creates an opportunity for us to use our web scraping skills and extract meaningful information by directly scraping their website.<\/p>\n<p>For example, let\u2019s have a look at the<span>&nbsp;<\/span><a href=\"https:\/\/www.livescore.cz\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Livescore.cz<\/a><span>&nbsp;<\/span>website.<\/p>\n<p>On their home page, they nicely display the tournaments and their matches that will be played today (the date you visit the website).<\/p>\n<p>We can retrieve information like:<\/p>\n<ul>\n<li>Tournament Name<\/li>\n<li>Match Time<\/li>\n<li>Team 1 Name (e.g. Country, Football Club, etc.)<\/li>\n<li>Team 1 Goals<\/li>\n<li>Team 2 Name (e.g. 
Country, Football Club, etc.)<\/li>\n<li>Team 2 Goals<\/li>\n<li>etc.<\/li>\n<\/ul>\n<p>In our code example, we will extract the names of tournaments that have matches today.<\/p>\n<h2 id=\"the-code-1\">The code<\/h2>\n<p>Let\u2019s create a new spider in our project to retrieve the tournament names. I\u2019ll name this file<span>&nbsp;<\/span><strong>livescore_t.py<\/strong>.<\/p>\n<p>Here is the code that you need to enter inside<span>&nbsp;<\/span><strong>\/web_scraper\/spiders\/livescore_t.py<\/strong><\/p>\n<p><figure class=\"post-image post-mediaBleed alignnone\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-1329041 lazy\" src=\"https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.42.05.png\" alt width=\"809\" height=\"603\" sizes=\"(max-width: 809px) 100vw, 809px\" data-lazy=\"true\" data-srcset=\"https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.42.05.png 1400w, https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.42.05-280x210.png 280w, https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.42.05-362x270.png 362w, https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.42.05-181x135.png 181w, https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.42.05-796x594.png 796w\"><\/figure>\n<\/p>\n<h2><span>Code explanation:<\/span><\/h2>\n<ul>\n<li>As usual, import Scrapy.<\/li>\n<li>Create a class that inherits the properties and functionality of<span>&nbsp;<\/span><strong>scrapy.Spider<\/strong>.<\/li>\n<li>Give a unique name to our spider. 
Here, I used<span>&nbsp;<\/span><code>LiveScoreT<\/code><span>&nbsp;<\/span>as we will only be extracting the tournament names.<\/li>\n<li>The next step is to provide the URL of Livescore.cz.<\/li>\n<li>At last, the<span>&nbsp;<\/span><code>parse()<\/code><span>&nbsp;<\/span>function loops through all the matched elements that contain the<span>&nbsp;<\/span><strong>tournament name<\/strong><span>&nbsp;<\/span>and yields each one. In the end, we receive all the tournament names that have matches today. Note that this time I used a<span>&nbsp;<\/span><strong>CSS<\/strong><span>&nbsp;<\/span>selector instead of<span>&nbsp;<\/span><strong>XPath<\/strong>.<\/li>\n<\/ul>\n<p id=\"run-the-newly-created-spider\"><strong>Run the newly created spider:<\/strong><\/p>\n<p>It\u2019s time to see our spider in action. Run the below command to let the spider crawl the home page of the Livescore.cz website. The web scraping result will then be written to a new file called<span>&nbsp;<\/span><strong>ls_t.json<\/strong><span>&nbsp;<\/span>in JSON format.<\/p>\n<div class=\"highlight\" readability=\"7\">\n<pre><code class=\"language-shell\" data-lang=\"shell\">pipenv run scrapy crawl LiveScoreT -o ls_t.json <\/code><\/pre>\n<\/div>\n<p>By now you know the drill.<\/p>\n<p id=\"results-1\"><strong>Results:<\/strong><\/p>\n<p>This is what our web spider extracted on 18 November 2020 from<span>&nbsp;<\/span><a href=\"https:\/\/www.livescore.cz\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Livescore.cz<\/a>. 
Remember that the output may change every day.<\/p>\n<figure class=\"post-image post-mediaBleed alignnone\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-1329046 lazy\" src=\"https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.43.49.png\" alt width=\"795\" height=\"448\" sizes=\"(max-width: 795px) 100vw, 795px\" data-lazy=\"true\" data-srcset=\"https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.43.49.png 1392w, https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.43.49-280x158.png 280w, https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.43.49-479x270.png 479w, https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.43.49-240x135.png 240w, https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.43.49-796x448.png 796w, https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.43.49-1200x675.png 1200w\"><\/figure>\n<h2 id=\"a-more-advanced-use-case\">A more advanced use case<\/h2>\n<p>In this section, instead of just retrieving the tournament name, we will go the extra mile and get complete details of tournaments and their matches.<\/p>\n<p>Create a new file inside<span>&nbsp;<\/span><strong>\/web_scraper\/spiders\/<\/strong><span>&nbsp;<\/span>and name it<span>&nbsp;<\/span><strong>livescore.py<\/strong>. 
Now, enter the below code in it.<\/p>\n<div class=\"highlight\">\n<pre><figure class=\"post-image post-mediaBleed alignnone\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-1329048 lazy\" src=\"https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.44.47.png\" alt width=\"706\" height=\"1536\" sizes=\"(max-width: 706px) 100vw, 706px\" data-lazy=\"true\" data-srcset=\"https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.44.47.png 706w, https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.44.47-97x210.png 97w, https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.44.47-124x270.png 124w, https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.44.47-62x135.png 62w\"><\/figure><\/pre>\n<\/div>\n<h3 id=\"code-explanation-2\">Code explanation:<\/h3>\n<p>The code structure of this file is the same as in our previous examples. Here, we just updated the<span>&nbsp;<\/span><code>parse()<\/code><span>&nbsp;<\/span>method with new functionality.<\/p>\n<p>Basically, we extracted all the HTML<span>&nbsp;<\/span><code>&lt;tr&gt;&lt;\/tr&gt;<\/code><span>&nbsp;<\/span>elements from the page. Then, we looped through them to find out whether each one is a tournament or a match. If it is a tournament, we extracted its name. 
In the case of a match, we extracted its time, its state, and the name and score of both teams.<\/p>\n<h3 id=\"run-the-example\">Run the example:<\/h3>\n<p>Type the following command inside the console and execute it.<\/p>\n<div class=\"highlight\" readability=\"7\">\n<pre><code class=\"language-shell\" data-lang=\"shell\">pipenv run scrapy crawl LiveScore -o ls.json <\/code><\/pre>\n<\/div>\n<h3 id=\"results-2\">Results:<\/h3>\n<p>Here is a sample of what has been retrieved:<\/p>\n<div class=\"highlight\">\n<pre><figure class=\"post-image post-mediaBleed alignnone\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-1329049 lazy\" src=\"https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.45.33.png\" alt width=\"702\" height=\"990\" sizes=\"(max-width: 702px) 100vw, 702px\" data-lazy=\"true\" data-srcset=\"https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.45.33.png 702w, https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.45.33-149x210.png 149w, https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.45.33-191x270.png 191w, https:\/\/cdn0.tnwcdn.com\/wp-content\/blogs.dir\/1\/files\/2020\/11\/Screenshot-2020-11-23-at-10.45.33-96x135.png 96w\"><\/figure><\/pre>\n<\/div>\n<p>Now with this data, we can do anything we want, such as using it to train our own neural network to predict future games.<\/p>\n<h2 id=\"conclusion\">Conclusion<\/h2>\n<p>Data analysts often use<span>&nbsp;<\/span><strong>web scraping<\/strong><span>&nbsp;<\/span>because it helps them collect data for making predictions. Similarly, businesses use it to extract emails from web pages, as it is an effective way of generating leads. 
We can even use it to monitor the prices of products.<\/p>\n<p>In other words, web scraping has many use cases, and<span>&nbsp;<\/span><strong>Python<\/strong><span>&nbsp;<\/span>is fully capable of handling them.<\/p>\n<p>So, what are you waiting for? Try scraping your favorite websites now.<\/p>\n<p><i><span>This <\/span><\/i><a href=\"https:\/\/livecodestream.dev\/post\/2020-11-18-how-to-turn-the-web-into-data-with-python-and-scrapy\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\"><i><span>article<\/span><\/i><\/a><i><span> was originally published on <\/span><\/i><a href=\"https:\/\/livecodestream.dev\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\"><i><span>Live Code Stream<\/span><\/i><\/a><i><span> by <\/span><\/i><a href=\"https:\/\/www.linkedin.com\/in\/bajcmartinez\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\"><i><span>Juan Cruz Martinez<\/span><\/i><\/a><i><span> (twitter: <\/span><\/i><a href=\"https:\/\/twitter.com\/bajcmartinez\" target=\"_blank\" rel=\"nofollow noopener noreferrer\"><i><span>@bajcmartinez<\/span><\/i><\/a><i><span>), founder and publisher of Live Code Stream, entrepreneur, developer, author, speaker, and doer of things.<\/span><\/i><\/p>\n<p><a href=\"https:\/\/livecodestream.dev\/subscribe\" target=\"_blank\" rel=\"nofollow noopener noreferrer\"><i><span>Live Code Stream<\/span><\/i><\/a><i><span> is also available as a free weekly newsletter. Sign up for updates on everything related to programming, AI, and computer science in general.<\/span><\/i><\/p>\n<p> <a href=\"https:\/\/thenextweb.com\/syndication\/2020\/11\/23\/a-beginners-guide-to-web-scraping-with-python-and-scrapy\/\">Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Since their inception,&nbsp;websites&nbsp;are used to share information. Whether it is a Wikipedia article, YouTube channel, Instagram account, or a Twitter handle. 
They all are packed with interesting data that is available for&#8230;<\/p>\n","protected":false},"author":1,"featured_media":1324,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[1],"tags":[],"_links":{"self":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts\/1323"}],"collection":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1323"}],"version-history":[{"count":0,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts\/1323\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/media\/1324"}],"wp:attachment":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1323"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1323"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1323"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}