
Sample code for reading and writing csv files in Python

In data analysis you often need to read data from csv files and write data back to csv files. Reading the rows of a csv file directly into dicts or a DataFrame is a very convenient, trouble-free approach; the code below uses the iris dataset as an example.

Read csv file as dict

Code

# -*- coding: utf-8 -*-
import csv

with open('E:/iris.csv') as csvfile:  # path is illustrative
    reader = csv.DictReader(csvfile, fieldnames=None)  # fieldnames defaults to None; if the csv file has no header row you need to specify it
    list_1 = [e for e in reader]  # each row is stored as a dict in a list
print(list_1[0])

Output

{'Petal.Length': '1.4', 'Sepal.Length': '5.1', 'Petal.Width': '0.2', 'Sepal.Width': '3.5', 'Species': 'setosa'}

If each row needs to be processed separately and the amount of data is large, it is better to process the rows one by one as they are read and append the results to the list.

list_1 = list()
for e in reader:
    list_1.append(your_func(e))  # your_func is the processing function applied to each row
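As an illustration, your_func could convert the numeric fields of each row from strings to floats; the definition below is only a hypothetical example and assumes the iris column names used in this article.

def your_func(row):
    # hypothetical processing function: convert the four measurement fields from str to float
    for key in ('Sepal.Length', 'Sepal.Width', 'Petal.Length', 'Petal.Width'):
        row[key] = float(row[key])
    return row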

Write multiple dicts to a csv file

Code

import csv

# Data
data = [
    {'Petal.Length': '1.4', 'Sepal.Length': '5.1', 'Petal.Width': '0.2', 'Sepal.Width': '3.5', 'Species': 'setosa'},
    {'Petal.Length': '1.4', 'Sepal.Length': '4.9', 'Petal.Width': '0.2', 'Sepal.Width': '3', 'Species': 'setosa'},
    {'Petal.Length': '1.3', 'Sepal.Length': '4.7', 'Petal.Width': '0.2', 'Sepal.Width': '3.2', 'Species': 'setosa'},
    {'Petal.Length': '1.5', 'Sepal.Length': '4.6', 'Petal.Width': '0.2', 'Sepal.Width': '3.1', 'Species': 'setosa'}
]
# Header
header = ['Sepal.Length', 'Sepal.Width', 'Petal.Length', 'Petal.Width', 'Species']
print(len(data))
with open('E:/iris_out.csv', 'w', newline='') as dstfile:  # newline='' prevents blank lines between rows; output path is illustrative
    writer = csv.DictWriter(dstfile, fieldnames=header)
    writer.writeheader()    # write the header row
    writer.writerows(data)  # write all rows in one batch

The code above writes all of the data to the csv file in one batch. If there is a lot of data and you want to see in real time how many rows have been written, use writerow instead to write one row at a time.
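A minimal sketch of that row-by-row approach, reusing the data and header variables from the block above (output path is again illustrative):

import csv

with open('E:/iris_out.csv', 'w', newline='') as dstfile:
    writer = csv.DictWriter(dstfile, fieldnames=header)
    writer.writeheader()
    for i, row in enumerate(data, 1):
        writer.writerow(row)       # write one row at a time
        if i % 1000 == 0:          # report progress every 1000 rows
            print('%d rows written' % i)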

Read csv file as DataFrame

Code

# Read csv file as DataFrame
import pandas as pd

dframe = pd.read_csv('E:/iris.csv')  # path is illustrative

It can also be done in a slightly more roundabout way:

import csv
import pandas as pd

with open('E:/iris.csv') as csvfile:
    reader = csv.DictReader(csvfile, fieldnames=None)  # fieldnames defaults to None; specify it if the file has no header row
    list_1 = [e for e in reader]  # each row is stored as a dict in a list
dframe = pd.DataFrame.from_records(list_1)

Read specified csv file from zip file as DataFrame

The zip archive contains the csv file together with other files; the csv file can be read as a DataFrame directly, without extracting it first.

import pandas as pd
import zipfile

z_file = zipfile.ZipFile('E:/iris.zip')        # archive path is illustrative
dframe = pd.read_csv(z_file.open('iris.csv'))  # member name is illustrative
z_file.close()
print(dframe)
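The same read can also be written with a with statement so the archive is closed automatically even if the read fails; the archive and member names below are again placeholders:

import pandas as pd
import zipfile

with zipfile.ZipFile('E:/iris.zip') as z_file:     # placeholder archive path
    dframe = pd.read_csv(z_file.open('iris.csv'))  # placeholder member name
print(dframe)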

Write DataFrame to csv file

dframe.to_csv('E:/iris_out.csv', index=False)  # index=False: do not write the row index; path is illustrative

Read txt file as DataFrame

import pandas as pd

# path is the file path (or a file handle); header=None means the file has no header row;
# delimiter is the field separator; dtype=str reads every column in as strings
frame = pd.read_table(path, header=None, index_col=False, delimiter='\t', dtype=str)
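Since header=None tells pandas that the file has no header row, column names can be supplied explicitly with the names parameter; the path and column names below are only an assumed example:

import pandas as pd

# hypothetical example: a tab-separated iris file without a header row
frame = pd.read_table('E:/iris.txt', header=None, delimiter='\t', dtype=str,
                      names=['Sepal.Length', 'Sepal.Width', 'Petal.Length', 'Petal.Width', 'Species'])
print(frame.head())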

This is the whole content of this article.