Gitee Recommends | MindNLP, an Open-Source NLP Library Based on MindSpore


MindNLP



Introduction |
Quick Links |
Installation |
Get Started |
Tutorials |
Notes

Introduction

MindNLP is an open source NLP library based on MindSpore. It provides a platform for solving natural language processing tasks and contains many common NLP approaches. It helps researchers and developers construct and train models more conveniently and rapidly.

The master branch works with MindSpore master.

Major Features

  • Comprehensive data processing: several classical NLP datasets are packaged into friendly modules for easy use, such as Multi30k, SQuAD, CoNLL, etc.
  • Friendly NLP model toolset: MindNLP provides various configurable components, making it easy to customize models.
  • Easy-to-use engine: MindNLP simplifies the complicated training process in MindSpore and provides Trainer and Evaluator interfaces to train and evaluate models easily.

Quick Links

Installation

Dependency

  • mindspore >= 1.8.1
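MindSpore itself is not pulled in automatically. A hedged example for installing a CPU-only build from PyPI (GPU and Ascend builds should follow the official MindSpore installation guide):

pip install "mindspore>=1.8.1"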

Install from source

To install MindNLP from source, please run:

pip install git+https://github.com/mindspore-ecosystem/mindnlp.git
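To check that the environment works, a quick import test can be run (a minimal sketch; it only verifies that both packages import and prints the MindSpore version):

python -c "import mindspore, mindnlp; print(mindspore.__version__)"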

Get Started

Next, we will quickly implement a sentiment classification task using MindNLP.

Define Model

import math
from mindspore import nn
from mindspore import ops
from mindspore.common.initializer import Uniform, HeUniform
from mindnlp.abc import Seq2vecModel

class SentimentClassification(Seq2vecModel):
    def construct(self, text):
        # encode the input token ids; `hidden` holds the final hidden state
        # of every (layer, direction) pair of the LSTM encoder
        _, (hidden, _), _ = self.encoder(text)
        # concatenate the last layer's forward and backward hidden states
        context = ops.concat((hidden[-2, :, :], hidden[-1, :, :]), axis=1)
        output = self.head(context)
        return output

Define Hyperparameters

The following are some of the required hyperparameters in the model training process.

# define Models & Loss & Optimizer
hidden_size = 256
output_size = 1
num_layers = 2
bidirectional = True
dropout = 0.5
lr = 0.001

Data Preprocessing

The dataset is downloaded and preprocessed by calling the dataset interface in mindnlp.

Load dataset:

from mindnlp.dataset import load

imdb_train, imdb_test = load('imdb', shuffle=True)

Initialize the vocab and tokenizer for preprocessing:

from mindnlp.modules import Glove
from mindnlp.transforms import BasicTokenizer

embedding, vocab = Glove.from_pretrained('6B', 100, special_tokens=["<unk>", "<pad>"], dropout=dropout)
tokenizer = BasicTokenizer(True)

The loaded dataset is preprocessed and divided into training and validation:

from mindnlp.dataset import process

imdb_train = process('imdb', imdb_train, tokenizer=tokenizer, vocab=vocab,
                     bucket_boundaries=[400, 500], max_len=600, drop_remainder=True)
imdb_test = process('imdb', imdb_test, tokenizer=tokenizer, vocab=vocab,
                    bucket_boundaries=[400, 500], max_len=600, drop_remainder=False)
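
The Trainer below also expects a validation set (imdb_valid). The original snippet does not show this step; a minimal sketch, assuming the processed dataset supports MindSpore's standard split method and using an assumed 70/30 ratio:

# split the processed training data into training and validation subsets
# (the 70/30 ratio is an assumption, not part of the original tutorial)
imdb_train, imdb_valid = imdb_train.split([0.7, 0.3])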

Instantiate Model

from mindnlp.modules import RNNEncoder

# build encoder
lstm_layer = nn.LSTM(100, hidden_size, num_layers=num_layers, batch_first=True,
                     dropout=dropout, bidirectional=bidirectional)
encoder = RNNEncoder(embedding, lstm_layer)

# build head
head = nn.SequentialCell([
    nn.Dropout(1 - dropout),
    nn.Sigmoid(),
    nn.Dense(hidden_size * 2, output_size,
             weight_init=HeUniform(math.sqrt(5)),
             bias_init=Uniform(1 / math.sqrt(hidden_size * 2)))
])

# build network
network = SentimentClassification(encoder, head)
loss = nn.BCELoss(reduction='mean')
optimizer = nn.Adam(network.trainable_params(), learning_rate=lr)

Training Process

Now that we have completed all the preparations, we can begin to train the model.

from mindnlp.engine.metrics import Accuracy
from mindnlp.engine.trainer import Trainer

# define metrics
metric = Accuracy()

# define trainer
trainer = Trainer(network=network, train_dataset=imdb_train, eval_dataset=imdb_valid, metrics=metric,
                  epochs=5, loss_fn=loss, optimizer=optimizer)
trainer.run(tgt_columns="label")
print("end train")
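
After training, the held-out test split can be scored in a similar way. This is a hedged sketch only: the Evaluator import path and arguments are assumed to mirror the Trainer interface above and are not taken from the original tutorial.

from mindnlp.engine.evaluator import Evaluator

# evaluate the trained network on the processed test split
# (module path and argument names are assumptions)
evaluator = Evaluator(network=network, eval_dataset=imdb_test, metrics=metric)
evaluator.run(tgt_columns="label")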

License

This project is released under the Apache 2.0 license.

Feedbacks and Contact

MindNLP is still under active development. If you find any issue or have an idea for a new feature, please don't hesitate to contact us via GitHub Issues.

Acknowledgement

MindSpore is an open source project that welcomes any contribution and feedback. We hope that the toolbox and benchmark can serve the growing research community by providing a flexible and standardized toolkit to reimplement existing methods and to develop new NLP methods.

Citation

If you find this project useful in your research, please consider citing:

@misc{mindnlp2022,
    title={{MindNLP}: a MindSpore NLP library},
    author={MindNLP Contributors},
    howpublished={\url{https://github.com/mindlab-ai/mindnlp}},
    year={2022}
}
