Datawhale AI Summer Camp: Catalytic Reaction Yield Prediction
削尖脑袋跑得快 · 2024-08-07 14:31:03
Task: AI for Science from Scratch (AI + Chemistry)
Latest score: 0.3846
Competition link: Shanghai Academy of AI for Science (sais.com.cn)
I. The Competition Task
This competition provides experimental data for a variety of catalytic reactions common in drug synthesis, including the reaction substrates, the reaction additives (catalysts among them), the reaction solvents, and the reaction products. Participants are expected to analyze the reaction data and build a yield prediction model with machine learning, deep learning, or large language models, to help screen reaction conditions for unseen new reactions.
Task: build a model that accurately predicts the yield of carbon-nitrogen bond-forming reactions.
The idea is to featurize the substrates, additives, solvents, and products in a sensible way and then fit a machine learning or deep learning model to predict the reaction yield; alternatively, an open-source large language model can be fine-tuned on the training set to predict yields.
A sample of the data is shown below:
rxnid | Reactant1 | Reactant2 | Product | Additive | Solvent | Yield |
---|---|---|---|---|---|---|
train1 | c1ccc2c(c1)Nc1ccccc1O2 | Brc1ccccc1I | Brc1ccccc1N1c2ccccc2Oc2ccccc21 | CC(C)(C)[O-].CC(C)(C)[PH+](C(C)(C)C)C(C)(C)C.F[B-](F)(F)F.F[B-](F)(F)F.O=C(C=Cc1ccccc1)C=Cc1ccccc1.O=C(C=Cc1ccccc1)C=Cc1ccccc1.[H+].[Na+].[Pd] | Cc1ccccc1 | 0.78 |
train2 | c1ccc2c(c1)Nc1ccccc1O2 | Brc1ccccc1I | Brc1ccccc1N1c2ccccc2Oc2ccccc21 | C1COCCOCCOCCOCCOCCO1.O=C([O-])[O-].[Cu+].[I-].[K+].[K+] | Clc1ccccc1Cl | 0.90 |
train3 | c1ccc2c(c1)Nc1ccccc1O2 | Brc1ccccc1I | Brc1ccccc1N1c2ccccc2Oc2ccccc21 | CC(=O)[O-].CC(=O)[O-].CC(C)(C)[O-].CC(C)(C)[PH+](C(C)(C)C)C(C)(C)C.F[B-](F)(F)F.F[B-](F)(F)F.[H+].[Na+].[Pd+2] | CC1(C)C=CC=CC1 | 0.85 |
II. Task 1: The Baseline Code
1) Import the basic packages
import pickle  # for serializing/deserializing Python objects
import pandas as pd  # data handling and analysis
from tqdm import tqdm  # progress bars
from sklearn.ensemble import RandomForestRegressor  # random forest regression model
from rdkit.Chem import rdMolDescriptors  # molecular descriptor calculations
from rdkit import RDLogger, Chem  # molecule handling and logging
import numpy as np  # numerical computing
RDLogger.DisableLog('rdApp.*')  # silence RDKit's logging to avoid noisy output
2) Convert a list of SMILES strings into molecular fingerprint vectors
def mfgen(mol, nBits=2048, radius=2):
    '''
    Parameters
    ----------
    mol : mol
        RDKit mol object.
    nBits : int
        Number of bits for the fingerprint.
    radius : int
        Radius of the Morgan fingerprint.

    Returns
    -------
    mf_desc_map : ndarray
        ndarray of molecular fingerprint descriptors.
    '''
    fp = rdMolDescriptors.GetMorganFingerprintAsBitVect(mol, radius=radius, nBits=nBits)
    return np.array(list(map(int, list(fp.ToBitString()))))  # int, not eval: parse the '0'/'1' characters safely

def vec_cpd_lst(smi_lst):
    smi_set = list(set(smi_lst))
    smi_vec_map = {}
    for smi in tqdm(smi_set):
        mol = Chem.MolFromSmiles(smi)
        smi_vec_map[smi] = mfgen(mol)
    smi_vec_map[''] = np.zeros(2048)  # an empty SMILES maps to an all-zero fingerprint
    vec_lst = [smi_vec_map[smi] for smi in smi_lst]
    return np.array(vec_lst)
Explanation: mfgen generates a molecule's Morgan fingerprint. It takes an RDKit molecule object, the number of bits (default 2048), and the radius (default 2), computes the fingerprint, and returns it as a numpy array.
vec_cpd_lst converts a list of SMILES strings into a list of fingerprint vectors. It first builds the set of unique SMILES, computes the fingerprint vector for each one and caches it in a dictionary, then maps every SMILES in the input list to its vector and returns the vectors as a numpy array.
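A quick sanity check makes the behavior concrete; this assumes the two functions above have been run, and the SMILES strings are arbitrary examples rather than competition data:

demo_fps = vec_cpd_lst(['CCO', 'c1ccccc1', 'CCO'])  # ethanol, benzene, ethanol again
print(demo_fps.shape)  # (3, 2048): one 2048-bit row per input SMILES
print(int(demo_fps[0].sum()))  # how many bits are set for ethanol
print(bool((demo_fps[0] == demo_fps[2]).all()))  # True: identical SMILES share one fingerprint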
3) Load the training and test sets
dataset_dir = './dataset' # Change this to your dataset directory
train_df = pd.read_csv(f'{dataset_dir}/round1_train_data.csv')
test_df = pd.read_csv(f'{dataset_dir}/round1_test_data.csv')
print(f'Training set size: {len(train_df)}, test set size: {len(test_df)}')
Training set size: 23538, test set size: 2616
4) Process the training and test sets
train_rct1_smi = train_df['Reactant1'].to_list()
train_rct2_smi = train_df['Reactant2'].to_list()
train_add_smi = train_df['Additive'].to_list()
train_sol_smi = train_df['Solvent'].to_list()
train_rct1_fp = vec_cpd_lst(train_rct1_smi)
train_rct2_fp = vec_cpd_lst(train_rct2_smi)
train_add_fp = vec_cpd_lst(train_add_smi)
train_sol_fp = vec_cpd_lst(train_sol_smi)
train_x = np.concatenate([train_rct1_fp,train_rct2_fp,train_add_fp,train_sol_fp],axis=1)
train_y = train_df['Yield'].to_numpy()
test_rct1_smi = test_df['Reactant1'].to_list()
test_rct2_smi = test_df['Reactant2'].to_list()
test_add_smi = test_df['Additive'].to_list()
test_sol_smi = test_df['Solvent'].to_list()
test_rct1_fp = vec_cpd_lst(test_rct1_smi)
test_rct2_fp = vec_cpd_lst(test_rct2_smi)
test_add_fp = vec_cpd_lst(test_add_smi)
test_sol_fp = vec_cpd_lst(test_sol_smi)
test_x = np.concatenate([test_rct1_fp,test_rct2_fp,test_add_fp,test_sol_fp],axis=1)
100%|██████████████████████████████████████████████████████████████████████████████| 6081/6081 [05:16<00:00, 19.24it/s]
100%|██████████████████████████████████████████████████████████████████████████████| 8763/8763 [07:22<00:00, 19.79it/s]
100%|██████████████████████████████████████████████████████████████████████████████| 1284/1284 [01:18<00:00, 16.26it/s]
100%|████████████████████████████████████████████████████████████████████████████████| 432/432 [00:29<00:00, 14.48it/s]
100%|██████████████████████████████████████████████████████████████████████████████| 1257/1257 [00:54<00:00, 23.21it/s]
100%|██████████████████████████████████████████████████████████████████████████████| 1743/1743 [00:46<00:00, 37.15it/s]
100%|████████████████████████████████████████████████████████████████████████████████| 404/404 [00:14<00:00, 27.65it/s]
100%|████████████████████████████████████████████████████████████████████████████████| 143/143 [00:09<00:00, 15.24it/s]
vec_cpd_lst converts the SMILES strings of each reactant, additive, and solvent into molecular fingerprints, producing one fingerprint vector per SMILES and storing the vectors in the corresponding variables.
The fingerprint vectors for Reactant1, Reactant2, Additive, and Solvent are then concatenated along the column (feature) axis into one large feature matrix (train_x / test_x), where each row corresponds to a sample and each column to a feature (one fingerprint bit).
5) Train and save the model
# Model fitting
model = RandomForestRegressor(n_estimators=100,max_depth=10,min_samples_split=2,min_samples_leaf=1,n_jobs=-1)
model.fit(train_x,train_y)
# Saving model to file
with open('./random_forest_model.pkl', 'wb') as file:
    pickle.dump(model, file)
# Load model from file
with open('random_forest_model.pkl', 'rb') as file:
    loaded_model = pickle.load(file)
# Inference
test_pred = loaded_model.predict(test_x)
The baseline uses a random forest regression model. A brief introduction:
Basic concepts
- Ensemble learning: a random forest is an ensemble method that combines multiple learners (decision trees) to improve predictive performance and stability.
- Decision tree: a tree-structured model that splits the data into ever smaller parts to make a prediction; each internal node tests one feature and each leaf holds a prediction.
- Randomness: random forests inject randomness to strengthen generalization, in two ways:
  - Sample randomness: each tree is trained on a bootstrap sample of the training data (random sampling with replacement).
  - Feature randomness: at each node split, only a random subset of the features is considered rather than all of them.
How it works
- Build many decision trees: each tree is trained on a different bootstrap sample, and each split considers only part of the features.
- Aggregate the decisions: for regression, the forest predicts the average of all trees' outputs. The averaging reduces the overfitting a single tree is prone to, improving the stability and accuracy of the whole model (a small sketch follows the lists below).
Advantages
- High accuracy: combining many trees usually beats a single decision tree.
- Overfitting resistance: the randomness injected during training lowers the risk of overfitting the training data.
- High-dimensional data: random forests handle high-dimensional inputs and need no standardization or normalization.
- Feature importance: the forest can score each feature's importance, which helps with feature selection.
Disadvantages
- Long training time: building many trees can be slow, especially with large data or many features.
- Slower prediction: combining the outputs of many trees makes inference relatively slow.
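Here is the small sketch promised above: a hedged illustration of the averaging and the feature-importance output, assuming train_x and train_y from the baseline are already in memory (the hyperparameters are illustrative, not tuned):

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rf = RandomForestRegressor(n_estimators=100, max_depth=10, oob_score=True, n_jobs=-1)
rf.fit(train_x, train_y)
# For regression, the forest's prediction is the mean of the individual trees' outputs:
per_tree = np.stack([tree.predict(train_x[:5]) for tree in rf.estimators_])
print(np.allclose(per_tree.mean(axis=0), rf.predict(train_x[:5])))  # True
print(f'OOB R^2: {rf.oob_score_:.3f}')  # out-of-bag estimate of generalization
top_bits = np.argsort(rf.feature_importances_)[-10:]  # the 10 most informative fingerprint bits
print('Most important feature columns:', top_bits)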
6) Save the results
ans_str_lst = ['rxnid,Yield']
for idx, y in enumerate(test_pred):
    ans_str_lst.append(f'test{idx+1},{y:.4f}')
with open('./submit.txt', 'w') as fw:
    fw.writelines('\n'.join(ans_str_lst))
Running this produces a file called submit.txt; submit it on the platform and you get a score!
I got 0.1996 here. Everyone's score seems to come out a little different, but it is fun to have a score at all.
7) The full code:
import pickle
import pandas as pd
from tqdm import tqdm
from sklearn.ensemble import RandomForestRegressor
from rdkit.Chem import rdMolDescriptors
from rdkit import RDLogger,Chem
import numpy as np
RDLogger.DisableLog('rdApp.*')
def mfgen(mol, nBits=2048, radius=2):
    '''
    Parameters
    ----------
    mol : mol
        RDKit mol object.
    nBits : int
        Number of bits for the fingerprint.
    radius : int
        Radius of the Morgan fingerprint.

    Returns
    -------
    mf_desc_map : ndarray
        ndarray of molecular fingerprint descriptors.
    '''
    fp = rdMolDescriptors.GetMorganFingerprintAsBitVect(mol, radius=radius, nBits=nBits)
    return np.array(list(map(int, list(fp.ToBitString()))))  # int, not eval: parse the bit string safely

def vec_cpd_lst(smi_lst):
    smi_set = list(set(smi_lst))
    smi_vec_map = {}
    for smi in tqdm(smi_set):
        mol = Chem.MolFromSmiles(smi)
        smi_vec_map[smi] = mfgen(mol)
    smi_vec_map[''] = np.zeros(2048)
    vec_lst = [smi_vec_map[smi] for smi in smi_lst]
    return np.array(vec_lst)
## Vectorization
dataset_dir = './dataset' # Change this to your dataset directory
train_df = pd.read_csv(f'{dataset_dir}/round1_train_data.csv')
test_df = pd.read_csv(f'{dataset_dir}/round1_test_data.csv')
print(f'Training set size: {len(train_df)}, test set size: {len(test_df)}')
train_rct1_smi = train_df['Reactant1'].to_list()
train_rct2_smi = train_df['Reactant2'].to_list()
train_add_smi = train_df['Additive'].to_list()
train_sol_smi = train_df['Solvent'].to_list()
train_rct1_fp = vec_cpd_lst(train_rct1_smi)
train_rct2_fp = vec_cpd_lst(train_rct2_smi)
train_add_fp = vec_cpd_lst(train_add_smi)
train_sol_fp = vec_cpd_lst(train_sol_smi)
train_x = np.concatenate([train_rct1_fp,train_rct2_fp,train_add_fp,train_sol_fp],axis=1)
train_y = train_df['Yield'].to_numpy()
test_rct1_smi = test_df['Reactant1'].to_list()
test_rct2_smi = test_df['Reactant2'].to_list()
test_add_smi = test_df['Additive'].to_list()
test_sol_smi = test_df['Solvent'].to_list()
test_rct1_fp = vec_cpd_lst(test_rct1_smi)
test_rct2_fp = vec_cpd_lst(test_rct2_smi)
test_add_fp = vec_cpd_lst(test_add_smi)
test_sol_fp = vec_cpd_lst(test_sol_smi)
test_x = np.concatenate([test_rct1_fp,test_rct2_fp,test_add_fp,test_sol_fp],axis=1)
## Model fitting and saving
# Model fitting
model = RandomForestRegressor(n_estimators=100,max_depth=10,min_samples_split=2,min_samples_leaf=1,n_jobs=-1)
model.fit(train_x,train_y)
# Saving model to file
with open('./random_forest_model.pkl', 'wb') as file:
    pickle.dump(model, file)
# Load model from file
with open('random_forest_model.pkl', 'rb') as file:
    loaded_model = pickle.load(file)
# Inference
test_pred = loaded_model.predict(test_x)
## Answer generation
ans_str_lst = ['rxnid,Yield']
for idx, y in enumerate(test_pred):
    ans_str_lst.append(f'test{idx+1},{y:.4f}')
with open('./submit.txt', 'w') as fw:
    fw.writelines('\n'.join(ans_str_lst))
I also tried training a LightGBM model, which improved the score a bit; tweaking the parameters could probably push it higher. Unfortunately I ran out of submission attempts.
import pickle
import pandas as pd
from tqdm import tqdm
from rdkit.Chem import rdMolDescriptors
from rdkit import RDLogger, Chem
import numpy as np
import lightgbm as lgb
RDLogger.DisableLog('rdApp.*')
def mfgen(mol, nBits=2048, radius=2):
    '''
    Parameters
    ----------
    mol : mol
        RDKit mol object.
    nBits : int
        Number of bits for the fingerprint.
    radius : int
        Radius of the Morgan fingerprint.

    Returns
    -------
    mf_desc_map : ndarray
        ndarray of molecular fingerprint descriptors.
    '''
    fp = rdMolDescriptors.GetMorganFingerprintAsBitVect(mol, radius=radius, nBits=nBits)
    return np.array(list(map(int, list(fp.ToBitString()))))

def vec_cpd_lst(smi_lst):
    smi_set = list(set(smi_lst))
    smi_vec_map = {}
    for smi in tqdm(smi_set):
        mol = Chem.MolFromSmiles(smi)
        smi_vec_map[smi] = mfgen(mol)
    smi_vec_map[''] = np.zeros(2048)
    vec_lst = [smi_vec_map[smi] for smi in smi_lst]
    return np.array(vec_lst)
## Vectorization
dataset_dir = './dataset'  # change this to your dataset directory
train_df = pd.read_csv(f'{dataset_dir}/round1_train_data.csv')
test_df = pd.read_csv(f'{dataset_dir}/round1_test_data.csv')
print(f'Training set size: {len(train_df)}, test set size: {len(test_df)}')
train_rct1_smi = train_df['Reactant1'].to_list()
train_rct2_smi = train_df['Reactant2'].to_list()
train_add_smi = train_df['Additive'].to_list()
train_sol_smi = train_df['Solvent'].to_list()
train_rct1_fp = vec_cpd_lst(train_rct1_smi)
train_rct2_fp = vec_cpd_lst(train_rct2_smi)
train_add_fp = vec_cpd_lst(train_add_smi)
train_sol_fp = vec_cpd_lst(train_sol_smi)
train_x = np.concatenate([train_rct1_fp, train_rct2_fp, train_add_fp, train_sol_fp], axis=1)
train_y = train_df['Yield'].to_numpy()
test_rct1_smi = test_df['Reactant1'].to_list()
test_rct2_smi = test_df['Reactant2'].to_list()
test_add_smi = test_df['Additive'].to_list()
test_sol_smi = test_df['Solvent'].to_list()
test_rct1_fp = vec_cpd_lst(test_rct1_smi)
test_rct2_fp = vec_cpd_lst(test_rct2_smi)
test_add_fp = vec_cpd_lst(test_add_smi)
test_sol_fp = vec_cpd_lst(test_sol_smi)
test_x = np.concatenate([test_rct1_fp, test_rct2_fp, test_add_fp, test_sol_fp], axis=1)
## Model fitting and saving
# Model fitting
model = lgb.LGBMRegressor(n_estimators=100, max_depth=10, learning_rate=0.1, num_leaves=31)
model.fit(train_x, train_y)
# Save the model to a file
with open('./lightgbm_model.pkl', 'wb') as file:
    pickle.dump(model, file)
# Load the model from the file
with open('lightgbm_model.pkl', 'rb') as file:
    loaded_model = pickle.load(file)
# Inference
test_pred = loaded_model.predict(test_x)
## Answer generation
ans_str_lst = ['rxnid,Yield']
for idx, y in enumerate(test_pred):
    ans_str_lst.append(f'test{idx+1},{y:.4f}')
with open('./submit.txt', 'w') as fw:
    fw.writelines('\n'.join(ans_str_lst))
III. Task 2: Modeling SMILES Sequences with an RNN
1) A brief introduction to the RNN:
1. RNN architecture diagram (the figure from the original post is omitted here).
At each time step the network applies a linear transformation followed by a nonlinearity; for the tanh RNN used below, h_t = tanh(W_ih * x_t + b_ih + W_hh * h_(t-1) + b_hh).
Key characteristics of RNNs:
- Recurrent structure: information is propagated through time, and the same set of weights is reused at every position of the sequence.
- Hidden state: the network's memory, capturing information from earlier elements of the sequence.
- Sequence input and output: an RNN can consume variable-length input sequences and produce variable-length outputs.
RNNs also have drawbacks. If the sequence is long, two characters far apart can only interact through a long chain of hidden vectors, like a message relayed from person to person: the more hops, the more easily information gets lost or distorted. RNNs therefore remember long sequences poorly (the sketch below makes the chain explicit).
Moreover, an RNN has to pass information forward step by step, so it parallelizes poorly and is prone to vanishing or exploding gradients.
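The sketch below unrolls the recurrence that nn.RNN (tanh variant) computes as an explicit Python loop, with toy shapes rather than the competition model; it shows why distant positions only communicate through a chain of hidden states and why the time steps cannot run in parallel:

import torch

def rnn_unrolled(x, W_ih, W_hh, b):
    # x: [seq_len, input_size]; returns the final hidden state
    h = torch.zeros(W_hh.shape[0])
    for x_t in x:  # one step per token: no parallelism over time
        h = torch.tanh(W_ih @ x_t + W_hh @ h + b)
    return h  # information from early tokens has passed through len(x) squashing nonlinearities

x = torch.randn(100, 16)  # a length-100 toy sequence
W_ih, W_hh, b = torch.randn(32, 16), torch.randn(32, 32) * 0.1, torch.zeros(32)
print(rnn_unrolled(x, W_ih, W_hh, b).shape)  # torch.Size([32])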
2) Import the basic packages and define the RNN model
import time
import re  # needed by the Smiles_tokenizer defined below
import pandas as pd
from typing import List, Tuple
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader, Subset

# Define the RNN model
class RNNModel(nn.Module):
    def __init__(self, num_embed, input_size, hidden_size, output_size, num_layers, dropout, device):
        super(RNNModel, self).__init__()
        self.embed = nn.Embedding(num_embed, input_size)
        self.rnn = nn.RNN(input_size, hidden_size, num_layers=num_layers,
                          batch_first=True, dropout=dropout, bidirectional=True)
        self.fc = nn.Sequential(nn.Linear(2 * num_layers * hidden_size, output_size),
                                nn.Sigmoid(),
                                nn.Linear(output_size, 1),
                                nn.Sigmoid())

    def forward(self, x):
        # x : [bs, seq_len]
        x = self.embed(x)
        # x : [bs, seq_len, input_size]
        _, hn = self.rnn(x)  # hn : [2*num_layers, bs, h_dim]
        hn = hn.transpose(0, 1)
        z = hn.reshape(hn.shape[0], -1)  # z : [bs, 2*num_layers*h_dim]
        output = self.fc(z).squeeze(-1)  # output : [bs]
        return output
3) Define the data-processing functions and the tokenizer
Given the peculiarities of SMILES, we have to define our own tokenizer and vocabulary.
The SMILES string is split into tokens (with a regex, falling back to single characters) and each token is replaced by its index in the vocabulary; an example follows the class definition below.
# Data processing
class Smiles_tokenizer():
    def __init__(self, pad_token, regex, vocab_file, max_length):
        self.pad_token = pad_token
        self.regex = regex
        self.vocab_file = vocab_file
        self.max_length = max_length

        with open(self.vocab_file, "r") as f:
            lines = f.readlines()
        lines = [line.strip("\n") for line in lines]
        vocab_dic = {}
        for index, token in enumerate(lines):
            vocab_dic[token] = index
        self.vocab_dic = vocab_dic

    def _regex_match(self, smiles):
        regex_string = r"(" + self.regex + r"|"
        regex_string += r".)"
        prog = re.compile(regex_string)

        tokenised = []
        for smi in smiles:
            tokens = prog.findall(smi)
            if len(tokens) > self.max_length:
                tokens = tokens[:self.max_length]
            tokenised.append(tokens)  # returns a list of token lists
        return tokenised

    def tokenize(self, smiles):
        tokens = self._regex_match(smiles)
        # add the start and end tokens: <CLS>, <SEP>
        tokens = [["<CLS>"] + token + ["<SEP>"] for token in tokens]
        tokens = self._pad_seqs(tokens, self.pad_token)
        token_idx = self._pad_token_to_idx(tokens)
        return tokens, token_idx

    def _pad_seqs(self, seqs, pad_token):
        pad_length = max([len(seq) for seq in seqs])
        padded = [seq + ([pad_token] * (pad_length - len(seq))) for seq in seqs]
        return padded

    def _pad_token_to_idx(self, tokens):
        idx_list = []
        for token in tokens:
            tokens_idx = []
            for i in token:
                if i in self.vocab_dic.keys():
                    tokens_idx.append(self.vocab_dic[i])
                else:
                    # unseen token: grow the vocabulary on the fly
                    self.vocab_dic[i] = max(self.vocab_dic.values()) + 1
                    tokens_idx.append(self.vocab_dic[i])
            idx_list.append(tokens_idx)
        return idx_list
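An example of what the tokenizer produces; this assumes the competition's vocab_full.txt is available, and the exact indices depend on that file:

REGEX = r"\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\.|=|#|-|\+|\\\\|\/|:|~|@|\?|>|\*|\$|\%[0-9]{2}|[0-9]"
tok = Smiles_tokenizer("<PAD>", REGEX, "../vocab_full.txt", max_length=300)
tokens, ids = tok.tokenize(["CCO.Brc1ccccc1I>CCOc1ccccc1"])
print(tokens[0][:6])  # ['<CLS>', 'C', 'C', 'O', '.', 'Br']
print(ids[0][:6])  # the matching vocabulary indices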
4) Read and process the data
import matplotlib.pyplot as plt

# Read and process the data
def read_data(file_path, train=True):
    df = pd.read_csv(file_path)
    reactant1 = df["Reactant1"].tolist()
    reactant2 = df["Reactant2"].tolist()
    product = df["Product"].tolist()
    additive = df["Additive"].tolist()
    solvent = df["Solvent"].tolist()
    if train:
        react_yield = df["Yield"].tolist()
    else:
        react_yield = [0 for i in range(len(reactant1))]

    # Join the reactants with '.' and append the product after '>'
    input_data_list = []
    for react1, react2, prod, addi, sol in zip(reactant1, reactant2, product, additive, solvent):
        input_info = ".".join([react1, react2])
        input_info = ">".join([input_info, prod])
        input_data_list.append(input_info)
    output = [(react, y) for react, y in zip(input_data_list, react_yield)]

    # Sequence length is an important reference; the code below inspects its distribution
    seq_length = [len(i[0]) for i in output]
    seq_length_200 = [len(i[0]) for i in output if len(i[0]) > 200]
    print(len(seq_length_200) / len(seq_length))  # fraction of sequences longer than 200
    seq_length.sort(reverse=True)
    plt.plot(range(len(seq_length)), seq_length)
    plt.title("sequence length distribution")
    plt.show()
    return output
class ReactionDataset(Dataset):
    def __init__(self, data: List[Tuple[List[str], float]]):
        self.data = data

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx]

def collate_fn(batch):
    REGEX = r"\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\.|=|#|-|\+|\\\\|\/|:|~|@|\?|>|\*|\$|\%[0-9]{2}|[0-9]"
    tokenizer = Smiles_tokenizer("<PAD>", REGEX, "../vocab_full.txt", max_length=300)
    smi_list = []
    yield_list = []
    for i in batch:
        smi_list.append(i[0])
        yield_list.append(i[1])
    tokenizer_batch = torch.tensor(tokenizer.tokenize(smi_list)[1])
    yield_list = torch.tensor(yield_list)
    return tokenizer_batch, yield_list
5) Generate the submission file
# Generate the submission file
def predicit_and_make_submit_file(model_file, output_file):
    NUM_EMBED = 294
    INPUT_SIZE = 300
    HIDDEN_SIZE = 512
    OUTPUT_SIZE = 512
    NUM_LAYERS = 10
    DROPOUT = 0.2
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    test_data = read_data('C:/Users/15871/Desktop/yield/round1_test_data.csv', train=False)
    test_dataset = ReactionDataset(test_data)
    test_loader = DataLoader(test_dataset, batch_size=64, shuffle=False, collate_fn=collate_fn)

    model = RNNModel(NUM_EMBED, INPUT_SIZE, HIDDEN_SIZE, OUTPUT_SIZE, NUM_LAYERS, DROPOUT, device).to(device)
    # Load the best model
    model.load_state_dict(torch.load(model_file))
    model.eval()
    output_list = []
    for i, (src, y) in enumerate(test_loader):
        src, y = src.to(device), y.to(device)
        with torch.no_grad():
            output = model(src)
            output_list += output.detach().tolist()
    ans_str_lst = ['rxnid,Yield']
    for idx, y in enumerate(output_list):
        ans_str_lst.append(f'test{idx+1},{y:.4f}')
    with open(output_file, 'w') as fw:
        fw.writelines('\n'.join(ans_str_lst))
    print("done!!!")

predicit_and_make_submit_file("../model/RNN.pth",
                              "../output/RNN_submit.txt")
The whole run took over an hour, and the returned score was actually lower, but it was still a good exercise in building and using an RNN.
IV. Further Attempts with LightGBM
I already tried a LightGBM model while working through Task 1 and it performed fairly well; with some parameter tuning it should predict even better.
1) LightGBM hyperparameter tuning
A partial code excerpt:
# (Excerpt from the cv_model function shown in full in the next section)
if clf_name == "lgb":
    train_matrix = lgb.Dataset(trn_x, label=trn_y)
    valid_matrix = lgb.Dataset(val_x, label=val_y)
    params = {
        'boosting_type': 'gbdt',
        'objective': 'regression',
        'metric': 'mse',
        'max_depth': 10,
        'min_child_weight': 1.0,
        'num_leaves': 31,  # value lost in the original post; LightGBM's default is shown here
        'lambda_l2': 12,
        'feature_fraction': 0.8,
        'bagging_fraction': 0.8,
        'bagging_freq': 1,
        'learning_rate': 0.01,
        'seed': 2024,
        'nthread': 16,
        'verbose': -1,
    }
    model = lgb.train(params, train_matrix, 5000, valid_sets=[train_matrix, valid_matrix],
                      categorical_feature=[], callbacks=[lgb.early_stopping(500), lgb.log_evaluation(500)])
    val_pred = model.predict(val_x, num_iteration=model.best_iteration)
    test_pred = model.predict(test_x, num_iteration=model.best_iteration)
    oof[valid_index] = val_pred
    test_predict += test_pred / folds
    cv_score = mean_squared_error(val_y, val_pred)
    cv_scores.append(cv_score)
return oof, test_predict

# Train with cross-validation and predict
oof, test_pred = cv_model(lgb, train_x, train_y, test_x, "lgb")
# Generate the submission
ans_str_lst = ['rxnid,Yield']
for idx, y in enumerate(test_pred):
    ans_str_lst.append(f'test{idx+1},{y:.4f}')
with open('./submit.txt', 'w') as fw:
    fw.writelines('\n'.join(ans_str_lst))
2) K-fold cross-validation
A brief introduction
K-fold cross-validation is a common model evaluation technique for assessing how a machine learning model performs on the training data. It splits the original dataset into K subsets called folds, then trains and validates the model K times: in each round one fold serves as the validation set while the remaining K-1 folds form the training set, yielding K models and K validation scores.
The steps of K-fold cross-validation (a minimal sklearn sketch follows this list):
1. Split the original dataset into K subsets.
2. For each subset i, use it as the validation set and the remaining K-1 subsets as the training set.
3. Train the model, evaluate it on the validation set, and record the metric (accuracy, precision, recall, or, here, MSE).
4. Repeat steps 2 and 3 until every subset has served as the validation set once, producing K metric values.
5. Average the K metric values as the final estimate of model performance.
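Here is the minimal sklearn sketch of that loop, on random toy data rather than the competition features (the full competition version follows in the next block):

import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

X, y = np.random.rand(100, 5), np.random.rand(100)
kf = KFold(n_splits=5, shuffle=True, random_state=2024)
scores = []
for fold, (tr_idx, va_idx) in enumerate(kf.split(X)):
    model = Ridge().fit(X[tr_idx], y[tr_idx])  # train on K-1 folds
    mse = mean_squared_error(y[va_idx], model.predict(X[va_idx]))  # validate on the held-out fold
    scores.append(mse)
print(f'mean CV MSE: {np.mean(scores):.4f}')  # average of the K fold scores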
Code implementation
Adding K-fold validation on top of the tuned parameters pushed the score up another notch.
# K-fold validation + LightGBM parameter tuning
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error

def cv_model(clf, train_x, train_y, test_x, clf_name, seed=2024):
    folds = 10
    kf = KFold(n_splits=folds, shuffle=True, random_state=seed)
    oof = np.zeros(train_x.shape[0])
    test_predict = np.zeros(test_x.shape[0])
    cv_scores = []

    for i, (train_index, valid_index) in enumerate(kf.split(train_x, train_y)):
        print('************************************ {} ************************************'.format(str(i+1)))
        trn_x, trn_y, val_x, val_y = train_x[train_index], train_y[train_index], train_x[valid_index], train_y[valid_index]

        if clf_name == "lgb":
            train_matrix = lgb.Dataset(trn_x, label=trn_y)
            valid_matrix = lgb.Dataset(val_x, label=val_y)
            params = {
                'boosting_type': 'gbdt',
                'objective': 'regression',
                'metric': 'mse',
                'max_depth': 10,
                'min_child_weight': 1.0,
                'num_leaves': 31,  # value lost in the original post; LightGBM's default is shown here
                'lambda_l2': 12,
                'feature_fraction': 0.8,
                'bagging_fraction': 0.8,
                'bagging_freq': 1,
                'learning_rate': 0.01,
                'seed': 2024,
                'nthread': 16,
                'verbose': -1,
            }
            model = lgb.train(params, train_matrix, 50000, valid_sets=[train_matrix, valid_matrix],
                              categorical_feature=[], callbacks=[lgb.early_stopping(500), lgb.log_evaluation(500)])
            val_pred = model.predict(val_x, num_iteration=model.best_iteration)
            test_pred = model.predict(test_x, num_iteration=model.best_iteration)

        oof[valid_index] = val_pred
        test_predict += test_pred / folds
        cv_score = mean_squared_error(val_y, val_pred)
        cv_scores.append(cv_score)

    print(f"Mean validation MSE: {np.mean(cv_scores):.4f}, std: {np.std(cv_scores):.4f}")
    return oof, test_predict

# Train with cross-validation and predict
oof, test_pred = cv_model(lgb, train_x, train_y, test_x, "lgb")
# Generate the submission
ans_str_lst = ['rxnid,Yield']
for idx, y in enumerate(test_pred):
    ans_str_lst.append(f'test{idx+1},{y:.4f}')
with open('./submit.txt', 'w') as fw:
    fw.writelines('\n'.join(ans_str_lst))
3) Grid search and cross-validation
Grid search and cross-validation are two staples of model evaluation and hyperparameter tuning in machine learning:
Grid search (Grid Search) is an exhaustive method: it iterates over a pre-specified set of parameter combinations (forming a grid), trains a model for each combination, and computes its performance metric (accuracy, AUC, etc.). It finally returns the best combination, i.e. the one that scores highest on the chosen metric. Grid search suits cases with few parameters; with a large parameter space it can consume a great deal of time and compute.
Cross-validation (Cross-validation) is designed to estimate model performance. It partitions the data into K disjoint subsets (folds) and runs several train/test rounds: each round holds out one fold for validation and trains on the rest, repeating until every fold has been the validation set once. Averaging all validation results gives an estimate of generalization ability. Cross-validation guards against the bias of a single random split, and it estimates the model's true performance particularly well on small datasets.
Code implementation
# Parameter grid for the search
from sklearn.model_selection import GridSearchCV

param_grid = {
    'feature_fraction': [0.6, 0.8, 1.0],
    'bagging_fraction': [0.6, 0.8, 1.0],
    'bagging_freq': [4, 6, 8],
    'learning_rate': [0.01, 0.05, 0.10],
}
# Run the grid search
lgb_estimator = lgb.LGBMRegressor(boosting_type='gbdt', objective='regression', metric='mse', seed=2024, nthread=16, verbose=-1)
grid_search = GridSearchCV(estimator=lgb_estimator, param_grid=param_grid, scoring='neg_mean_squared_error', cv=3)
grid_search.fit(train_x, train_y)

print("Best Parameters:", grid_search.best_params_)
print("Best CV Score:", -grid_search.best_score_)

# Train a model with the best parameters
best_params = grid_search.best_params_
model = lgb.LGBMRegressor(**best_params, boosting_type='gbdt', objective='regression', metric='mse', seed=2024, nthread=16, verbose=-1)
model.fit(train_x, train_y)

# Predict with the tuned model (the original said `loaded_model`, which refers to the earlier pickled model)
test_pred = model.predict(test_x)

# Generate the submission
ans_str_lst = ['rxnid,Yield']
for idx, y in enumerate(test_pred):
    ans_str_lst.append(f'test{idx+1},{y:.4f}')
with open('./submit.txt', 'w') as fw:
    fw.writelines('\n'.join(ans_str_lst))
Best Parameters: {'bagging_fraction': 0.8, 'bagging_freq': 8, 'feature_fraction': 0.6, 'learning_rate': 0.1}
Best CV Score: 0.048864451091337045
4) Trying Bayesian search
How it works:
The main idea of Bayesian search is this: given an objective function to optimize (in the broad sense, only its inputs and outputs need to be specified, not its internal structure or mathematical properties), keep adding sample points to update the posterior distribution of the objective (usually a Gaussian process) until the posterior essentially matches the true function. Each new sample point takes the information from the previous evaluations into account, so the current parameters can be chosen better. The core pieces are the prior function (Prior Function, PF) and the acquisition function (Acquisition Function, AC): the PF, typically Gaussian process regression, models the distribution of the objective, while the AC selects the next sample point to maximize information gain or minimize expected loss.
Steps:
1. Define the prior: based on the problem's characteristics and prior experience, put prior distributions over the hyperparameters.
2. Acquisition and sample selection: use an acquisition function (EI, PI, UCB, etc.) to score the potential value of candidate hyperparameter combinations and pick the next point to evaluate.
3. Evaluate and update: train the model with the chosen hyperparameters, evaluate it on the validation set, and use the result to update the posterior of the objective.
4. Iterate: repeat steps 2 and 3 until a stopping condition is met (maximum iterations, no significant improvement, etc.).
5. Pick the optimum: select the best-performing hyperparameter combination from the updated posterior.
Code implementation
from skopt import BayesSearchCV
from skopt.space import Real, Categorical, Integer

# Search space for BayesSearchCV
search_spaces = {
    'max_depth': (1, 10),
    'min_child_weight': (0.1, 10),
    'num_leaves': (2**4, 2**6),
    'lambda_l2': (1, 10),
    'feature_fraction': (0.1, 1),
    'bagging_fraction': (0.1, 1),
    'bagging_freq': (1, 8),
    'learning_rate': (0.01, 0.1)
}
# Run the Bayesian search
bayes_estimator = lgb.LGBMRegressor(boosting_type='gbdt', objective='regression', metric='mse', seed=1, nthread=16, verbose=-1)
n_iter_search = 20
bayes_search = BayesSearchCV(
    estimator=bayes_estimator,
    search_spaces=search_spaces,
    n_iter=n_iter_search,
    cv=5,
    verbose=3
)
bayes_search.fit(train_x, train_y)

print("Best Parameters:", bayes_search.best_params_)
# Note: without an explicit `scoring=` argument, best_score_ is the estimator's
# default R^2 score, so negating it here prints a negative number.
print("Best CV Score:", -bayes_search.best_score_)
Best CV Score: -0.17849290476807234
Surprisingly, the very first attempt already produced a decent score.
V. Task 3: Building and Using a Transformer Model
1) A brief introduction to the Transformer
Compared with traditional natural language processing (NLP) models such as recurrent neural networks (RNNs) and long short-term memory networks (LSTMs), the Transformer has several notable advantages (a minimal self-attention sketch follows this list):
1. **Global relationship modeling**:
- Self-attention lets the Transformer capture dependencies between every pair of positions in the input, so it models long-range dependencies well. This is especially useful for tasks that need global context, such as machine translation and text summarization.
2. **Efficient parallel computation**:
- The self-attention layers can be computed in parallel, so the model runs efficiently on GPUs and similar hardware. RNNs, by contrast, must process the sequence element by element, which limits how much they can be parallelized.
3. **No sequential dependency**:
- All positions of the input can be processed at the same time, without waiting for the previous step. This speeds up both training and inference.
4. **Long-range dependencies**:
- The Transformer captures semantic relations between widely separated tokens better, which matters when analyzing and predicting over longer texts.
5. **Computational profile**:
- The number of sequential operations does not grow with sequence length; self-attention itself scales quadratically in length, but as one large, highly parallel matrix product, which in practice often makes Transformers fast on long sequences.
In summary, thanks to efficient parallel computation, strong representational power, and effective handling of long-range dependencies, the Transformer has become one of the mainstream models of modern NLP. These strengths show up in practice, especially on tasks with large datasets and long text sequences.
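The promised sketch: a minimal scaled dot-product self-attention (single head, no masking, toy shapes), showing how every position attends to every other position in one matrix product instead of a step-by-step recurrence:

import torch
import torch.nn.functional as F

def self_attention(x, Wq, Wk, Wv):
    # x: [batch, seq_len, d_model]
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)  # [batch, seq_len, seq_len]
    return F.softmax(scores, dim=-1) @ v  # each output is a weighted mix of all positions

d = 64
x = torch.randn(2, 10, d)
Wq, Wk, Wv = (torch.randn(d, d) for _ in range(3))
print(self_attention(x, Wq, Wk, Wv).shape)  # torch.Size([2, 10, 64])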
2) Code implementation
!pip install pandas
import pandas as pd
from torch.utils.data import Dataset, DataLoader, Subset
from typing import List, Tuple
import re
import torch
import torch.nn as nn
import time
import torch.optim as optim
class Smiles_tokenizer():
    def __init__(self, pad_token, regex, vocab_file, max_length):
        self.pad_token = pad_token
        self.regex = regex
        self.vocab_file = vocab_file
        self.max_length = max_length

        with open(self.vocab_file, "r") as f:
            lines = f.readlines()
        lines = [line.strip("\n") for line in lines]
        vocab_dic = {}
        for index, token in enumerate(lines):
            vocab_dic[token] = index
        self.vocab_dic = vocab_dic

    def _regex_match(self, smiles):
        regex_string = r"(" + self.regex + r"|"
        regex_string += r".)"
        prog = re.compile(regex_string)

        tokenised = []
        for smi in smiles:
            tokens = prog.findall(smi)
            if len(tokens) > self.max_length:
                tokens = tokens[:self.max_length]
            tokenised.append(tokens)  # returns a list of token lists
        return tokenised

    def tokenize(self, smiles):
        tokens = self._regex_match(smiles)
        # add the start and end tokens: <CLS>, <SEP>
        tokens = [["<CLS>"] + token + ["<SEP>"] for token in tokens]
        tokens = self._pad_seqs(tokens, self.pad_token)
        token_idx = self._pad_token_to_idx(tokens)
        return tokens, token_idx

    def _pad_seqs(self, seqs, pad_token):
        pad_length = max([len(seq) for seq in seqs])
        padded = [seq + ([pad_token] * (pad_length - len(seq))) for seq in seqs]
        return padded

    def _pad_token_to_idx(self, tokens):
        idx_list = []
        new_vocab = []
        for token in tokens:
            tokens_idx = []
            for i in token:
                if i in self.vocab_dic.keys():
                    tokens_idx.append(self.vocab_dic[i])
                else:
                    new_vocab.append(i)
                    self.vocab_dic[i] = max(self.vocab_dic.values()) + 1
                    tokens_idx.append(self.vocab_dic[i])
            idx_list.append(tokens_idx)

        with open("../new_vocab_list.txt", "a") as f:
            for i in new_vocab:
                f.write(i)
                f.write("\n")
        return idx_list

    def _save_vocab(self, vocab_path):
        with open(vocab_path, "w") as f:
            for i in self.vocab_dic.keys():
                f.write(i)
                f.write("\n")
        print("update new vocab!")
def read_data(file_path, train=True):
    df = pd.read_csv(file_path)
    reactant1 = df["Reactant1"].tolist()
    reactant2 = df["Reactant2"].tolist()
    product = df["Product"].tolist()
    additive = df["Additive"].tolist()
    solvent = df["Solvent"].tolist()
    if train:
        react_yield = df["Yield"].tolist()
    else:
        react_yield = [0 for i in range(len(reactant1))]

    # Join reactant/additive/solvent with '.', then append the product after '>'
    input_data_list = []
    for react1, react2, prod, addi, sol in zip(reactant1, reactant2, product, additive, solvent):
        # input_info = ".".join([react1, react2, addi, sol])
        input_info = ".".join([react1, react2])
        input_info = ">".join([input_info, prod])
        input_data_list.append(input_info)
    output = [(react, y) for react, y in zip(input_data_list, react_yield)]
    return output
class ReactionDataset(Dataset):
    def __init__(self, data: List[Tuple[List[str], float]]):
        self.data = data

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx]

def collate_fn(batch):
    REGEX = r"\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\.|=|#|-|\+|\\\\|\/|:|~|@|\?|>|\*|\$|\%[0-9]{2}|[0-9]"
    tokenizer = Smiles_tokenizer("<PAD>", REGEX, "../vocab_full.txt", 300)
    smi_list = []
    yield_list = []
    for i in batch:
        smi_list.append(i[0])
        yield_list.append(i[1])
    tokenizer_batch = torch.tensor(tokenizer.tokenize(smi_list)[1])
    yield_list = torch.tensor(yield_list)
    return tokenizer_batch, yield_list
class TransformerEncoderModel(nn.Module):
    def __init__(self, input_dim, d_model, num_heads, fnn_dim, num_layers, dropout):
        super().__init__()
        self.embedding = nn.Embedding(input_dim, d_model)
        self.layerNorm = nn.LayerNorm(d_model)
        self.encoder_layer = nn.TransformerEncoderLayer(d_model=d_model,
                                                        nhead=num_heads,
                                                        dim_feedforward=fnn_dim,
                                                        dropout=dropout,
                                                        batch_first=True,
                                                        norm_first=True)  # pre-layernorm
        self.transformer_encoder = nn.TransformerEncoder(self.encoder_layer,
                                                         num_layers=num_layers,
                                                         norm=self.layerNorm)
        self.dropout = nn.Dropout(dropout)
        self.lc = nn.Sequential(nn.Linear(d_model, 256),
                                nn.Sigmoid(),
                                nn.Linear(256, 96),
                                nn.Sigmoid(),
                                nn.Linear(96, 1))

    def forward(self, src):
        # src : [batch_size, src_len]
        embedded = self.dropout(self.embedding(src))
        # embedded : [batch_size, src_len, d_model]
        outputs = self.transformer_encoder(embedded)
        # outputs : [batch_size, src_len, d_model]
        # take the first (CLS) position as the sequence representation
        z = outputs[:, 0, :]
        # z = torch.sum(outputs, dim=1)  # alternative: sum over all positions
        # z : [bs, d_model]
        outputs = self.lc(z)
        # outputs : [bs, 1]
        return outputs.squeeze(-1)
def adjust_learning_rate(optimizer, epoch, start_lr):
    """Decays the learning rate by a factor of 10 every 3 epochs"""
    lr = start_lr * (0.1 ** (epoch // 3))
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr
def train():
    # Hyperparameters
    N = 10
    INPUT_DIM = 292
    D_MODEL = 512
    NUM_HEADS = 4
    FNN_DIM = 1024
    NUM_LAYERS = 4
    DROPOUT = 0.2
    CLIP = 1
    N_EPOCHS = 40
    LR = 1e-4

    start_time = time.time()
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    # device = 'cpu'
    data = read_data("../dataset/round1_train_data.csv")
    dataset = ReactionDataset(data)
    subset_indices = list(range(N))
    subset_dataset = Subset(dataset, subset_indices)  # small debugging subset; unused below
    train_loader = DataLoader(dataset, batch_size=128, shuffle=True, collate_fn=collate_fn)

    model = TransformerEncoderModel(INPUT_DIM, D_MODEL, NUM_HEADS, FNN_DIM, NUM_LAYERS, DROPOUT)  # build the Transformer
    model = model.to(device)
    model.train()  # set the model to training mode

    optimizer = optim.AdamW(model.parameters(), lr=LR, weight_decay=0.01)  # optimizer
    scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=10)  # LR scheduler
    criterion = nn.MSELoss()  # loss function

    best_valid_loss = 10  # initial best validation loss
    for epoch in range(N_EPOCHS):
        epoch_loss = 0
        # adjust_learning_rate(optimizer, epoch, LR)  # manual LR decay (unused; the scheduler handles it)
        for i, (src, y) in enumerate(train_loader):
            src, y = src.to(device), y.to(device)
            optimizer.zero_grad()
            output = model(src)  # forward pass
            loss = criterion(output, y)  # compute the loss
            loss.backward()  # backpropagate
            torch.nn.utils.clip_grad_norm_(model.parameters(), CLIP)  # gradient clipping
            optimizer.step()
            epoch_loss += loss.detach().item()

            if i % 50 == 0:  # logging
                print(f'Step: {i} | Train Loss: {epoch_loss:.4f}')

        loss_in_a_epoch = epoch_loss / len(train_loader)  # average loss of this epoch
        scheduler.step(loss_in_a_epoch)  # adjust the LR based on the average loss
        print(f'Epoch: {epoch+1:02} | Train Loss: {loss_in_a_epoch:.3f}')
        if loss_in_a_epoch < best_valid_loss:
            best_valid_loss = loss_in_a_epoch
            # save the model after the best epoch so far
            torch.save(model.state_dict(), '../model/transformer.pth')
    end_time = time.time()
    # compute and print the total running time
    elapsed_time_minute = (end_time - start_time) / 60
    print(f"Total running time: {elapsed_time_minute:.2f} minutes")

if __name__ == '__main__':
    train()
# Generate the submission file
def predicit_and_make_submit_file(model_file, output_file):
    INPUT_DIM = 292  # src length
    D_MODEL = 512
    NUM_HEADS = 4
    FNN_DIM = 1024
    NUM_LAYERS = 4
    DROPOUT = 0.2
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    test_data = read_data("../dataset/round1_test_data.csv", train=False)
    test_dataset = ReactionDataset(test_data)
    test_loader = DataLoader(test_dataset, batch_size=128, shuffle=False, collate_fn=collate_fn)

    model = TransformerEncoderModel(INPUT_DIM, D_MODEL, NUM_HEADS, FNN_DIM, NUM_LAYERS, DROPOUT).to(device)
    # load the best model
    model.load_state_dict(torch.load(model_file))
    model.eval()
    output_list = []
    for i, (src, y) in enumerate(test_loader):
        src = src.to(device)
        with torch.no_grad():
            output = model(src)
            output_list += output.detach().tolist()
    ans_str_lst = ['rxnid,Yield']
    for idx, y in enumerate(output_list):
        ans_str_lst.append(f'test{idx+1},{y:.4f}')
    with open(output_file, 'w') as fw:
        fw.writelines('\n'.join(ans_str_lst))

predicit_and_make_submit_file("../model/transformer.pth",
                              "../output/result.txt")
3) Feature engineering: adding features and dimensionality reduction
The Transformer performs much better than the RNN, but its score still trails the machine learning models.
Next I continue with feature selection, dimensionality reduction, and adding features.
Adding features
# Imports needed for the descriptor calculations below
from rdkit import Chem
from rdkit.Chem import Descriptors, Descriptors3D, QED, Crippen, Lipinski, rdMolDescriptors

# Compute advanced molecular properties
def calc_advanced_properties(mol):
    mol_vol = Descriptors3D.Volume(mol)  # molecular volume (needs a 3D conformer; this helper is not used below)
    polar_surface_area = Descriptors.TPSA(mol)  # topological polar surface area
    return [mol_vol, polar_surface_area]

# Generate the Morgan fingerprint
def mfgen(mol, nBits=2048, radius=2):
    fp = rdMolDescriptors.GetMorganFingerprintAsBitVect(mol, radius=radius, nBits=nBits)
    return np.array(list(map(int, list(fp.ToBitString()))))

# Compute basic molecular properties
def calc_mol_properties(mol):
    mol_wt = Descriptors.MolWt(mol)  # molecular weight
    logp = Descriptors.MolLogP(mol)  # log of the partition coefficient
    num_h_donors = Descriptors.NumHDonors(mol)  # hydrogen-bond donors
    num_h_acceptors = Descriptors.NumHAcceptors(mol)  # hydrogen-bond acceptors
    num_aromatic_rings = Descriptors.NumAromaticRings(mol)  # aromatic rings
    num_aliphatic_rings = Descriptors.NumAliphaticRings(mol)  # aliphatic rings
    num_saturated_rings = Descriptors.NumSaturatedRings(mol)  # saturated rings
    num_heteroatoms = Descriptors.NumHeteroatoms(mol)  # heteroatoms
    qed = QED.qed(mol)  # QED drug-likeness score
    return [mol_wt, logp, num_h_donors, num_h_acceptors,
            num_aromatic_rings, num_aliphatic_rings, num_saturated_rings, num_heteroatoms, qed]

# Compute extended molecular properties
def calc_extended_properties(mol):
    mol_wt = Descriptors.MolWt(mol)  # molecular weight
    logp = Descriptors.MolLogP(mol)  # log of the partition coefficient
    tpsa = Descriptors.TPSA(mol)  # topological polar surface area
    return [mol_wt, logp, tpsa]

# Compute more molecular properties
def calc_more_properties(mol):
    qed = QED.qed(mol)  # drug-likeness score
    sa_score = Crippen.MolMR(mol)  # Crippen molar refractivity (despite the variable name)
    rot_bonds = Lipinski.NumRotatableBonds(mol)  # rotatable bonds
    return [qed, sa_score, rot_bonds]

# Compute topological properties
def calc_topological_properties(mol):
    return [Descriptors.NumRotatableBonds(mol), Descriptors.NumHeteroatoms(mol)]

# Clean the SMILES strings
def clean_smiles(smi_list):
    return [smi for smi in smi_list if Chem.MolFromSmiles(smi) is not None]

# Vectorize a list of compounds
def vec_cpd_lst(smi_lst):
    # note: filtering here shortens the list and can misalign the features with
    # train_y if any SMILES fails to parse; the `mol is not None` branch below
    # already handles invalid entries
    smi_lst = clean_smiles(smi_lst)
    smi_set = list(set(smi_lst))  # unique SMILES strings
    smi_vec_map = {}  # fingerprint cache
    smi_prop_map = {}  # property cache
    for smi in tqdm(smi_set):
        mol = Chem.MolFromSmiles(smi)
        if mol is not None:
            smi_vec_map[smi] = mfgen(mol)  # Morgan fingerprint
            smi_prop_map[smi] = calc_extended_properties(mol) + calc_more_properties(mol)  # 3 + 3 properties
        else:
            smi_vec_map[smi] = np.zeros(2048)  # default fingerprint for invalid SMILES
            smi_prop_map[smi] = [0, 0, 0, 0, 0, 0]  # default properties for invalid SMILES
    # build the fingerprint and property lists in the order of the input SMILES
    vec_lst = [smi_vec_map[smi] for smi in smi_lst]
    prop_lst = [smi_prop_map[smi] for smi in smi_lst]
    return np.array(vec_lst), np.array(prop_lst)
# Read the training and test sets
train_df = pd.read_csv('C:/Users/15871/Desktop/yield/round1_train_data.csv')
test_df = pd.read_csv('C:/Users/15871/Desktop/yield/round1_test_data.csv')
print(f'Training set size: {len(train_df)}, test set size: {len(test_df)}')

# SMILES strings from the training set
train_rct1_smi = train_df['Reactant1'].to_list()
train_rct2_smi = train_df['Reactant2'].to_list()
train_add_smi = train_df['Additive'].to_list()
train_sol_smi = train_df['Solvent'].to_list()
train_prod_smi = train_df['Product'].to_list()

# Fingerprint and property features
train_rct1_fp, train_rct1_prop = vec_cpd_lst(train_rct1_smi)
train_rct2_fp, train_rct2_prop = vec_cpd_lst(train_rct2_smi)
train_add_fp, train_add_prop = vec_cpd_lst(train_add_smi)
train_sol_fp, train_sol_prop = vec_cpd_lst(train_sol_smi)
train_prod_fp, train_prod_prop = vec_cpd_lst(train_prod_smi)

# Concatenate the features
train_x = np.concatenate([
    train_rct1_fp, train_rct2_fp, train_add_fp, train_sol_fp, train_prod_fp,
    train_rct1_prop, train_rct2_prop, train_add_prop, train_sol_prop, train_prod_prop], axis=1)
train_y = train_df['Yield'].to_numpy()

# SMILES strings from the test set
test_rct1_smi = test_df['Reactant1'].to_list()
test_rct2_smi = test_df['Reactant2'].to_list()
test_add_smi = test_df['Additive'].to_list()
test_sol_smi = test_df['Solvent'].to_list()
test_prod_smi = test_df['Product'].to_list()

# Test-set fingerprint and property features
test_rct1_fp, test_rct1_prop = vec_cpd_lst(test_rct1_smi)
test_rct2_fp, test_rct2_prop = vec_cpd_lst(test_rct2_smi)
test_add_fp, test_add_prop = vec_cpd_lst(test_add_smi)
test_sol_fp, test_sol_prop = vec_cpd_lst(test_sol_smi)
test_prod_fp, test_prod_prop = vec_cpd_lst(test_prod_smi)

# Concatenate the test features
test_x = np.concatenate([
    test_rct1_fp, test_rct2_fp, test_add_fp, test_sol_fp, test_prod_fp,
    test_rct1_prop, test_rct2_prop, test_add_prop, test_sol_prop, test_prod_prop], axis=1)
Dimensionality reduction (PCA)
from sklearn.decomposition import PCA
import numpy as np

pca = PCA()
pca.fit(train_x)

# Cumulative explained variance ratio
cumulative_variance = np.cumsum(pca.explained_variance_ratio_)

# Choose the number of components that retains 99% of the cumulative variance
n_components = np.argmax(cumulative_variance >= 0.99) + 1  # was `>= 1`, which effectively keeps everything
print(f"Number of components selected: {n_components}")

# Reduce dimensionality with the chosen number of components
pca = PCA(n_components=n_components)
train_x_pca = pca.fit_transform(train_x)
test_x_pca = pca.transform(test_x)

# Shapes after reduction
print(f"Reduced training data shape: {train_x_pca.shape}")
print(f"Reduced test data shape: {test_x_pca.shape}")

train_x_pca = np.round(train_x_pca, 4)
test_x_pca = np.round(test_x_pca, 4)

# Use the PCA features downstream (the original post referenced an undefined
# `train_x_combined` / `test_x_combined` here)
train_x = train_x_pca
test_x = test_x_pca

df_train_x = pd.DataFrame(train_x)
df_train_y = pd.DataFrame(train_y)
df_test_x = pd.DataFrame(test_x)
pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
4) Model ensembling:
For now this is only an idea and the implementation still needs work, but it is a promising direction: using LightGBM, XGBoost, and similar models together should improve prediction accuracy. A hedged sketch follows.
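A minimal sketch of what that fusion could look like; this is hypothetical: it assumes train_x, train_y, and test_x from the feature pipeline above, that xgboost is installed, and the 0.5/0.5 weights are placeholders that would need tuning on a validation fold:

import lightgbm as lgb
import xgboost as xgb

lgb_model = lgb.LGBMRegressor(n_estimators=500, learning_rate=0.05).fit(train_x, train_y)
xgb_model = xgb.XGBRegressor(n_estimators=500, learning_rate=0.05).fit(train_x, train_y)
# Simplest fusion: a weighted average of the two models' predictions
test_pred = 0.5 * lgb_model.predict(test_x) + 0.5 * xgb_model.predict(test_x)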
Thank you for reading this far. If anything here is wrong, corrections from more experienced readers are very welcome.