Asked by: Bastian Borum Andersen  Asked: 10/20/2023  Last edited by: Bastian Borum Andersen  Updated: 10/20/2023  Views: 63
Random sampling of n lists of m elements in python
Q:
I wrote this code, which creates all combinations of n lists of m elements in Python, samples a given number of unique combinations (the maximum possible, or 1000), and writes them to Excel. It basically works, but the problem is that it becomes very slow when product(m_i) gets very large.
A practical use case might be 32 lists of 2-3 elements each, from which I need to draw 1000 unique combinations. That can be on the order of 10 billion combinations, and creating all of them is slow when I actually only need 1000 unique ones.
I did consider just generating random samples and checking whether I had already produced each one, but that gets slow when the number of samples approaches the number of possible permutations.
import pandas as pd

df = pd.read_excel('Variables.xlsx', sheet_name="Variables", index_col=0)
df_out = pd.DataFrame(columns=df.index)

# Recursively emulate one nested for-loop per row of df and call
# execute_function once for every combination of non-NaN values
def for_recursive(number_of_loops, range_list, execute_function, current_index=0, iter_list=[]):
    if iter_list == []:
        iter_list = [0] * number_of_loops
    if current_index == number_of_loops - 1:
        for iter_list[current_index] in range_list.iloc[current_index].dropna():
            execute_function(iter_list)
    else:
        for iter_list[current_index] in range_list.iloc[current_index].dropna():
            for_recursive(number_of_loops, iter_list=iter_list, range_list=range_list,
                          current_index=current_index + 1, execute_function=execute_function)

# Append each generated combination as a new row of df_out
def do_whatever(index_list):
    df_out.loc[len(df_out)] = index_list

# Build every combination, then keep a random sample of at most 1000 of them
for_recursive(range_list=df, execute_function=do_whatever, number_of_loops=len(df))
df_out = df_out.sample(n=min(len(df_out), 1000))

with pd.ExcelWriter("Variables.xlsx", engine="openpyxl", mode="a", if_sheet_exists="replace") as writer:
    df_out.to_excel(writer, 'Simulations', index=False)
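The slow part is that every combination is materialised before sampling. A minimal sketch of one alternative, assuming the same df/df_out layout as above (decode_combination is a hypothetical helper, not from the question): draw distinct integer indices with random.sample and decode each index into one combination with mixed-radix arithmetic.

import math
import random
import pandas as pd

def decode_combination(index, option_lists):
    # Map an integer in [0, prod(len(options))) to one unique combination
    combo = []
    for options in option_lists:
        index, pos = divmod(index, len(options))
        combo.append(options[pos])
    return combo

# One list of allowed (non-NaN) values per row of df
option_lists = [list(df.iloc[i].dropna()) for i in range(len(df))]
total = math.prod(len(options) for options in option_lists)

# random.sample over a range draws distinct indices without building all combinations
k = min(total, 1000)
picked = random.sample(range(total), k)
df_out = pd.DataFrame([decode_combination(i, option_lists) for i in picked], columns=df.index)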
A:
0 votes
Dan Nagle
10/20/2023
#1
Make use of the standard library. The itertools module can generate a list of all possible combinations of the data.
import pandas as pd
data = pd.DataFrame({'A': ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j'],
                     'B': [1, 4, 6, 33, 0, 0, 0, 0, 0, 0],
                     'C': [2, 6, 8, 44, 1, 1, 1, 1, 1, 1],
                     'D': [3, 0, 0, 55, 0, 0, 0, 0, 0, 0],
                     })
from itertools import product
full_collection = (list(product(data['A'], data['B'], data['C'], data['D'])))
print(len(full_collection)) # 10000
The random.sample function will draw unique samples without repetition.
import random
samples = random.sample(full_collection, 1000)
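If the sampled tuples then need to go back into the 'Simulations' sheet as in the question, one possible follow-up (a sketch that reuses the writer settings from the question) is to wrap them in a DataFrame first:

import pandas as pd

# Assumes `samples` from above and the same Variables.xlsx workbook as the question
df_out = pd.DataFrame(samples, columns=data.columns)
with pd.ExcelWriter("Variables.xlsx", engine="openpyxl", mode="a", if_sheet_exists="replace") as writer:
    df_out.to_excel(writer, sheet_name='Simulations', index=False)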
Edit: a possible alternative solution
Instead of creating a list of all possible combinations, generate random combinations from the unique values in each column of the data set. A generator expression keeps the solution memory-efficient, but it does not guarantee that every sample is unique.
sample_size = 1000

# Get the column names
col_names = tuple(data.columns)

# Create a dictionary of unique values in each column
unique_values = dict()
for col_name in col_names:
    unique_values[col_name] = tuple(data[col_name].unique())

# Create a sample generator
samples_gen = (tuple([random.choice(unique_values[col_name])
                      for col_name in col_names])
               for _ in range(sample_size))

# Iterate through the generated samples
while True:
    try:
        sample = next(samples_gen)
    except StopIteration:
        break
    # Do something with the sample
    print(sample)
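Because this generator can repeat a combination, one way to enforce uniqueness on top of it, sketched here under the assumption that sample_size stays well below the number of possible combinations, is to collect results into a set until enough distinct samples have been seen (the second answer below uses the same idea):

# Keep drawing until sample_size distinct combinations have been collected
unique_samples = set()
while len(unique_samples) < sample_size:
    unique_samples.add(tuple(random.choice(unique_values[col_name])
                             for col_name in col_names))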
Use a closure function to create a simpler iterator:
def sample_generator_from_dataframe(data, col_names=None):
    if col_names is None:
        col_names = tuple(data.columns)
    unique_values = dict()
    for col_name in col_names:
        unique_values[col_name] = tuple(data[col_name].unique())

    # An infinite sample generator
    def _generator():
        while True:
            yield tuple([random.choice(unique_values[col_name])
                         for col_name in col_names])

    # Calling the inner generator function returns the iterator itself
    return _generator()

# Initialise the generator with the dataframe content
new_sample_gen = sample_generator_from_dataframe(data)

# Iterate over generated samples
for _ in range(sample_size):
    sample = next(new_sample_gen)
    # Do something with the generated sample
    print(sample)
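Since the inner generator is infinite, itertools.islice is another way to pull a fixed number of samples from it without the explicit for/next loop; a small usage sketch:

from itertools import islice

# Take sample_size items from the infinite generator and materialise them as a list
samples = list(islice(sample_generator_from_dataframe(data), sample_size))
print(len(samples))  # == sample_size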
Comments
0 votes
Bastian Borum Andersen
10/20/2023
This is much better than what I did :) However, it still gets slow once my data set has many parameters (e.g. A through T instead of just A through D). Is it possible not to create the full collection, and only create the random part?
0 votes
Dan Nagle
10/20/2023
You can generate samples from the unique values in each column. I have updated the answer with code for this possible solution.
0 votes
JonSG
10/20/2023
#2
As long as you don't select anything close to the number of possibilities, this seems relatively fast.
import string
import random
## ------------------
## create a bunch of short lists
## ------------------
data = [random.sample(string.ascii_letters, 3) for _ in range(32)]
## ------------------
## ------------------
## out of the trillions of possibilities, select n.
## n should not approach the number of possibilities
## ------------------
n = 1000
results = set()
# Collect exactly n distinct combinations
while len(results) < n:
    results.add(tuple(random.choice(row) for row in data))
## ------------------
for row in results:
    print("".join(row))