To grab a single table from the page, use the following code:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import csv
from urllib.request import urlopen
from bs4 import BeautifulSoup
from urllib.error import HTTPError

try:
    # the Wikipedia page whose tables we want to scrape
    html = urlopen("https://en.wikipedia.org/wiki/Comparison_of_text_editors")
except HTTPError as e:
    print("not found")

bsObj = BeautifulSoup(html, "html.parser")
tables = bsObj.findAll("table", {"class": "wikitable"})
if not tables:  # findAll returns an empty list, not None, when nothing matches
    print("no table")
    exit(1)
table = tables[0]  # keep only the first "wikitable"

rows = table.findAll("tr")
csvFile = open("editors.csv", 'wt', newline='', encoding='utf-8')  # output file name; change as needed
writer = csv.writer(csvFile)
try:
    for row in rows:
        csvRow = []
        for cell in row.findAll(['td', 'th']):
            csvRow.append(cell.get_text())
        writer.writerow(csvRow)
finally:
    csvFile.close()
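To check quickly that the CSV came out as expected, you can read it back with the same csv module. This is a minimal sketch that assumes the output file name editors.csv used above; adjust it to whatever name you pass to open().

import csv

# Print the header row plus the first few data rows of the file written above.
with open("editors.csv", newline='', encoding='utf-8') as f:
    reader = csv.reader(f)
    for lineNum, row in enumerate(reader):
        print(row)
        if lineNum >= 3:  # header plus three rows is enough for a sanity check
            break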
To grab all of the tables on the page, use the following code:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import csv
from urllib.request import urlopen
from bs4 import BeautifulSoup
from urllib.error import HTTPError

try:
    html = urlopen("https://en.wikipedia.org/wiki/Comparison_of_text_editors")
except HTTPError as e:
    print("not found")

bsObj = BeautifulSoup(html, "html.parser")
tables = bsObj.findAll("table", {"class": "wikitable"})  # every table with class "wikitable"
if not tables:  # empty list means no matching tables were found
    print("no table")
    exit(1)

i = 1
for table in tables:
    fileName = "table%d.csv" % i  # one CSV file per table: table1.csv, table2.csv, ...
    rows = table.findAll("tr")
    csvFile = open(fileName, 'wt', newline='', encoding='utf-8')
    writer = csv.writer(csvFile)
    try:
        for row in rows:
            csvRow = []
            for cell in row.findAll(['td', 'th']):
                csvRow.append(cell.get_text())
            writer.writerow(csvRow)
    finally:
        csvFile.close()
    i += 1
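For comparison, the same "every wikitable on the page, one CSV per table" task can also be done in a few lines with pandas, which handles both the HTML parsing and the CSV writing. This is only a sketch and not part of the original script: it assumes pandas is installed, along with lxml or html5lib for its HTML parser.

import pandas as pd

# pandas.read_html returns one DataFrame per matching <table> element.
url = "https://en.wikipedia.org/wiki/Comparison_of_text_editors"
tables = pd.read_html(url, attrs={"class": "wikitable"})

for i, df in enumerate(tables, start=1):
    # mirror the naming scheme used above: table1.csv, table2.csv, ...
    df.to_csv("table%d.csv" % i, index=False)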
That is everything I have to share about using Python to fetch a web page's table data and store it in CSV files. I hope it gives you a useful reference, and I hope you will continue to support us.