SoFunction
Updated on 2024-11-19

Pandas+Numpy+Sklearn Random Number Implementation Example

This article documents how to use pandas, numpy, and scikit-learn in Python to randomly shuffle, sample, and split data. The main methods covered are:

  • sample
  • shuffle
  • train_test_split

Import libraries

In [1]:

import pandas as pd
import numpy as np
import random  # Random module

import plotly_express as px  # Visualization libraries
import plotly.graph_objects as go

Built-in data

The tips consumption dataset that ships with the plotly library is used:

In [2]:

df = px.data.tips()  # The tips consumption dataset bundled with plotly
df.head()

Basic Information

In [3]:

df.shape

Out[3]:

(244, 7)

In [4]:

columns = df.columns
columns

Out[4]:

Index(['total_bill', 'tip', 'sex', 'smoker', 'day', 'time', 'size'], dtype='object')

sample implementation

Row direction

In [5]:

Randomly select a row of records:

df.sample()  # Randomly select one row

Randomly select multiple rows of data, e.g. df.sample(5) for five rows.

Proportional random sampling is achieved through the parameter frac:

df.sample(frac=0.05)  # Sample 5% of the rows

Column direction

Here sampling is applied to the column attributes instead: a chosen number or proportion of columns is drawn, while the number of rows stays constant.

In [8]:

df.sample(3, axis=1)  # Sample 3 of the column attributes
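sample returns different rows (or columns) on every call; for reproducible draws it accepts a random_state seed, and replace=True allows sampling with replacement (bootstrap-style). A minimal sketch on a toy DataFrame (illustrative values, not the full tips dataset):

```python
import pandas as pd

# Toy frame standing in for the tips data (illustrative values only)
df = pd.DataFrame({"total_bill": [16.99, 10.34, 21.01, 23.68, 24.59],
                   "tip": [1.01, 1.66, 3.50, 3.31, 3.61]})

fixed = df.sample(3, random_state=42)   # the same 3 rows on every run
again = df.sample(3, random_state=42)
print(fixed.equals(again))              # True: identical draws

boot = df.sample(10, replace=True, random_state=0)  # resample with replacement
print(len(boot))                        # 10, more rows than the frame itself
```

Without replace=True, asking for more rows than the frame contains raises a ValueError.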

shuffle implementation

scikit-learn's shuffle

In [9]:

from sklearn.utils import shuffle

In [10]:

shuffle(df)  # Shuffle the data
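The same full shuffle can be done in pandas alone, without scikit-learn: sample(frac=1) draws 100% of the rows in random order. A sketch on toy data:

```python
import pandas as pd

df = pd.DataFrame({"x": range(6)})

# frac=1 draws every row exactly once, in random order -- a full shuffle
shuffled = df.sample(frac=1, random_state=7)

# As with sklearn's shuffle, the original index labels tag along;
# reset_index(drop=True) renumbers the rows from 0
clean = shuffled.reset_index(drop=True)

print(sorted(shuffled["x"]))  # every row survives: [0, 1, 2, 3, 4, 5]
```

This is handy when scikit-learn is not installed; pass a random_state for a repeatable shuffle, as above.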

random module shuffle

In [11]:

length = list(range(len(df)))  # Index list over the original rows
length[:5]

Out[11]:

[0, 1, 2, 3, 4]

In [12]:

random.shuffle(length)  # Shuffle the index in place

In [13]:

length[:5]

Out[13]:

[136, 35, 207, 127, 29]  # Shuffled result

In [14]:

df.iloc[length]   # Fetch the rows via the shuffled index
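The three steps — build an index list, shuffle it in place, index with iloc — combine into one self-contained sketch (seeded for repeatability; toy data rather than the tips set):

```python
import random
import pandas as pd

df = pd.DataFrame({"x": range(5)})

idx = list(range(len(df)))  # 0..4, one position per row
random.seed(1)              # make the shuffle repeatable
random.shuffle(idx)         # in place: returns None, mutates idx

shuffled = df.iloc[idx]     # rows in shuffled order
print(sorted(shuffled["x"]))  # every row still present: [0, 1, 2, 3, 4]
```

Note that random.shuffle returns None, so writing `idx = random.shuffle(idx)` would silently lose the list.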

numpy implementation

In [15]:

# First, generate a permuted (shuffled) index
np.random.permutation(len(df))

Out[15]:

array([223,  98, 238,  17, 101,  26, 122, 212,  27,  79, 210, 147, 176,
        82, 164, 142, 141, 219,   6,  63, 185, 112, 158, 188, 242, 207,
        45,  55, 178, 150, 217,  32,  16, 160, 157, 234,  95, 174,  93,
        52,  57, 220, 216, 230,  35,  86, 125, 114, 100,  73,  83,  88,
        34,   7,  40, 115,  97, 165,  84,  18, 197, 151, 135, 121,  72,
       173, 228, 143, 227,   9, 183,  56,  23, 237, 136, 106, 133, 189,
       139,   0, 208,  74, 166,   4,  68,  12,  71,  85, 172, 138, 149,
       144, 232, 186,  99, 130,  41, 201, 204,  10, 167, 195,  66, 159,
       213,  87, 103, 117,  31, 211, 190,  24, 243, 127,  48, 218, 233,
       113,  81, 235, 229, 206,  96,  46, 222,  50, 156, 180, 214, 124,
       240, 140,  89, 225,   2, 120,  58, 169, 193,  39, 102, 104, 148,
       184, 170, 152, 153, 146, 179, 137, 129,  64,   3,  65, 128,  90,
       110,  14, 226, 181, 131, 203, 221,  80,  51,  94, 231,  44, 108,
        43, 145,  47,  75, 162, 163,  69, 126, 200,   1, 123,  37, 205,
       111,  25,  91,  11,  42,  67, 118, 196, 161,  28, 116, 105,  33,
        38,  78,  76, 224,  20, 202, 171, 177, 107,   8, 209, 239,  77,
       241, 154,   5, 198,  92,  61, 182,  36,  70,  22,  54, 187, 175,
       119, 215,  49, 134,  21,  60,  62, 168,  59, 155, 194, 109, 132,
        19, 199,  29, 191,  13,  30, 192, 236,  15,  53])

In [16]:

# Select the data via the permuted index

df.iloc[np.random.permutation(len(df))]
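NumPy's newer Generator API gives the same permutation with an explicit seed, and df.take accepts the permuted positions directly. A sketch on toy data:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"x": range(8)})

rng = np.random.default_rng(seed=0)  # seeded generator, repeatable runs
perm = rng.permutation(len(df))      # permuted positions 0..7

shuffled = df.take(perm)             # positional selection, like iloc
print(sorted(shuffled["x"]))         # [0, 1, 2, 3, 4, 5, 6, 7]
```

Unlike the legacy np.random.permutation, the Generator keeps its seed local instead of touching global state.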

train_test_split implementation

from sklearn.model_selection import train_test_split

data = []

for i in train_test_split(df, test_size=0.2):
    data.append(i)

In [18]:

The first element holds 80% of the data:

data[0]   # 80% of data

The remaining 20% of the data:

data[1]   # 20% of data
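Rather than collecting the pieces in a list, the usual idiom unpacks the two return values directly; random_state makes the split repeatable. A sketch on toy data, assuming scikit-learn is installed:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.DataFrame({"x": range(100)})

# test_size=0.2 -> 80 training rows, 20 test rows
train, test = train_test_split(df, test_size=0.2, random_state=42)

print(len(train), len(test))  # 80 20
```

Every row lands in exactly one of the two frames, so together they reconstruct the original data.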

This concludes this article on implementing random sampling with pandas, numpy, and scikit-learn. For more related content, please search my previous articles or continue browsing the articles below. I hope you will support me in the future!