# Config¶

class config.Config[source]

In this class, we set the configuration parameters and adopt a C library for data and memory processing. In the following, we train and test models.

## Getting Statistics of Dataset¶

Config.get_ent_total()[source]

This method gets the total number of entities in the knowledge base.

Config.get_rel_total()[source]

This method gets the total number of relations in the knowledge base.
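
As a rough illustration of where these totals come from: OpenKE-style benchmarks conventionally store the entry count on the first line of entity2id.txt and relation2id.txt. The sketch below assumes that file convention; it is not this library's actual parsing code.

```python
import os
import tempfile

def read_total(path):
    """Read the count stored on the first line of a *2id.txt file
    (the usual OpenKE benchmark convention)."""
    with open(path) as f:
        return int(f.readline().strip())

# Build a tiny mock entity2id.txt to demonstrate the convention:
# first line = total, then one "name<TAB>id" pair per line.
tmpdir = tempfile.mkdtemp()
ent_file = os.path.join(tmpdir, "entity2id.txt")
with open(ent_file, "w") as f:
    f.write("3\n/m/alice\t0\n/m/bob\t1\n/m/carol\t2\n")

print(read_total(ent_file))  # → 3
```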

## Setting Configuration Parameters¶

Config.set_alpha(alpha)[source]

This method sets the learning rate for gradient descent.

Parameters: alpha (float) – the learning rate.
Config.set_margin(margin)[source]

This method sets the margin for the widely used pairwise margin-based ranking loss.

Parameters: margin (float) – the margin for the margin-based ranking loss.
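
For reference, the pairwise margin-based ranking loss penalizes a positive triple whenever it does not score at least `margin` better than its negative counterpart. A minimal sketch in plain Python, assuming the TransE convention that lower scores mean more plausible triples (the function name is illustrative):

```python
def margin_ranking_loss(pos_scores, neg_scores, margin):
    """Pairwise margin-based ranking loss: sum over pairs of
    max(0, margin + score(positive) - score(negative)),
    where lower scores mean more plausible triples."""
    return sum(max(0.0, margin + p - n) for p, n in zip(pos_scores, neg_scores))

# The first pair already satisfies the margin (contributes 0);
# the second pair violates it and contributes 1.0 + 1.0 - 1.2 = 0.8.
pos = [0.5, 1.0]
neg = [2.0, 1.2]
print(margin_ranking_loss(pos, neg, margin=1.0))  # → 0.8
```
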
Config.set_bern(bern)[source]

This method sets the strategy for negative sampling.

Parameters: bern – the negative sampling strategy, either “bern” (Bernoulli sampling) or “unif” (uniform sampling).
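
To unpack the two strategies: “unif” corrupts heads and tails with equal probability, while “bern” uses the Bernoulli trick from TransH (Wang et al.), biasing corruption by the relation's tails-per-head (tph) and heads-per-tail (hpt) statistics to reduce false negatives. A sketch of the head-replacement probability; the function and parameter names here are illustrative, not this library's internals:

```python
def head_replace_prob(tph, hpt, strategy="bern"):
    """Probability of corrupting the head entity of a triple.

    "unif": replace head or tail with equal probability.
    "bern": replace the head with probability tph / (tph + hpt),
    so one-to-many relations (high tph) corrupt heads more often,
    which lowers the chance of generating a false negative.
    """
    if strategy == "unif":
        return 0.5
    return tph / (tph + hpt)

# For a one-to-many relation, "bern" prefers corrupting the head:
print(head_replace_prob(tph=3.0, hpt=1.0))                   # → 0.75
print(head_replace_prob(tph=3.0, hpt=1.0, strategy="unif"))  # → 0.5
```
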
Config.set_dimension(dim)[source]

This method sets the entity dimension and the relation dimension to the same value.

Parameters: dim (int) – the shared dimension of entity and relation embeddings.
Config.set_ent_dimension(dim)[source]

This method sets the dimension of entity embeddings.

Parameters: dim (int) – the dimension of entity embeddings.
Config.set_rel_dimension(dim)[source]

This method sets the dimension of relation embeddings.

Parameters: dim (int) – the dimension of relation embeddings.
Config.set_train_times(times)[source]

This method sets the rounds for training.

Parameters: times (int) – rounds for training.
Config.set_nbatches(nbatches)[source]

This method sets the number of batches.

Parameters: nbatches (int) – the number of batches per training round.
Config.set_work_threads(threads)[source]

Multi-threaded training can be used for acceleration. This method sets the number of worker threads.

Parameters: threads (int) – the number of worker threads.

Config.set_ent_neg_rate(rate)[source]

The number of negatives generated per positive training sample influences the experimental results. This method sets the number of negative entities constructed per positive sample.

Parameters: rate (int) – the number of negative entities per positive sample.
Config.set_rel_neg_rate(rate)[source]

This method sets the number of negative relations per positive sample.

Parameters: rate (int) – the number of negative relations per positive sample.
Config.set_lr_decay(lr_decay)[source]

This method sets the learning rate decay for the Adagrad optimization method.

Parameters: lr_decay (float) – the learning rate decay.
Config.set_weight_decay(weight_decay)[source]

This method sets the weight decay for the Adagrad optimization method.

Parameters: weight_decay (float) – weight decay for Adagrad.
Config.set_opt_method(method)[source]

This method sets the optimizer for your model.

Parameters: method (str) – the optimization method; one of SGD, Adagrad, Adam, or Adadelta.
Config.set_log_on(flag)[source]

This method sets whether to log the loss value.

Parameters: flag (bool) – if True, the loss value is logged during training.
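
Putting the setters above together, a typical configuration might look like the following. This is a sketch, assuming the module is importable as `config` (matching the class path at the top of this page); the specific values are illustrative, not recommendations.

```python
import config

con = config.Config()
con.set_work_threads(4)       # multi-threaded training
con.set_train_times(500)      # training rounds
con.set_nbatches(100)         # batches per round
con.set_alpha(0.001)          # learning rate
con.set_margin(1.0)           # margin for the ranking loss
con.set_bern("unif")          # negative sampling strategy
con.set_dimension(100)        # entity and relation embedding dimension
con.set_ent_neg_rate(1)       # negative entities per positive sample
con.set_rel_neg_rate(0)       # negative relations per positive sample
con.set_opt_method("SGD")     # SGD / Adagrad / Adam / Adadelta
con.set_log_on(True)          # log the loss during training
```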

## Setting Input and Output Paths¶

Config.set_in_path(path)[source]

This method sets the path of the benchmark dataset.

Parameters: path – the path to the benchmark dataset.
Config.set_out_files(path)[source]

This method sets where to export the embedding matrices.

Parameters: path – the file that the embedding matrices will be exported to.
Config.set_import_files(path)[source]

Model parameters are exported automatically every few rounds. This method sets the path to find exported model parameters.

Parameters: path – path to automatically exported model parameters.
Config.set_export_files(path)[source]

Model parameters will be exported to this path automatically.

Parameters: path – files that model parameters will be exported to.
Config.set_export_steps(steps)[source]

This method sets how often the model parameters are exported automatically.

Parameters: steps (int) – the model is exported via torch.save() every steps rounds.
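
The export schedule can be pictured as a checkpoint every `steps` rounds. A small stand-alone sketch of that schedule (not the library's actual training loop):

```python
def export_rounds(train_times, export_steps):
    """Rounds at which the model would be checkpointed when
    set_export_steps(export_steps) is in effect - a sketch of the
    schedule, not the library's actual implementation."""
    return [r for r in range(train_times) if r % export_steps == 0]

print(export_rounds(train_times=10, export_steps=3))  # → [0, 3, 6, 9]
```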

Config.save_pytorch()[source]

This method saves the model parameters to self.exportName, which was set by set_export_files().

Config.restore_pytorch()[source]

This method restores the model via torch.load().

Config.export_variables(path=None)[source]

This method exports model parameters via torch.save().

Parameters: path – if None, this function is equivalent to save_pytorch(); otherwise, parameters are saved to path.
Config.import_variables(path=None)[source]

This method imports model parameters via torch.load().

Parameters: path – if None, this function is equivalent to restore_pytorch(); otherwise, parameters are loaded from path.
Config.get_parameters(mode='numpy')[source]

This method gets the model parameters.

Parameters: mode – if “numpy”, returns the model parameters as numpy arrays; if “list”, returns them as lists.
Config.save_parameters(path=None)[source]

This method saves the model parameters as JSON files when training finishes.

Parameters: path – if None, parameters are saved to self.out_path, which was set by set_out_files().

## Setting Models¶

Config.set_model(model)[source]

This method sets the training model and the optimization method.

Parameters: model – the training model. Choose from models.TransE, models.TransH, models.TransR, models.TransD, models.RESCAL, models.DistMult, and models.ComplEx.
Config.sampling()[source]

In this function, we choose positive samples and construct negative samples.

## Training Models¶

Config.run()[source]

In this function, we train the model.

## Testing Models¶

Config.set_test_flag(flag)[source]

This method sets whether we test our model.

Parameters: flag (bool) – if True, we test the model.

Note

test_flag must be set after all the other configuration parameters are set.

Config.test()[source]

In this function, we test the model.
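
Tying the sections together, an end-to-end script might look like this. It is a sketch under assumptions: the modules are importable as `config` and `models`, and the `con.init()` call (common in OpenKE example scripts, but not documented in this section) loads the benchmark before the model is set. Note that set_test_flag() comes after all other configuration parameters, as required above.

```python
import config
import models

con = config.Config()
con.set_in_path("./benchmarks/FB15K/")  # benchmark directory
con.set_dimension(100)
con.set_train_times(500)
con.set_nbatches(100)
con.set_alpha(0.001)
con.set_margin(1.0)
con.set_opt_method("SGD")
con.set_export_files("./res/model.pt")  # checkpoint target for torch.save()
con.set_export_steps(50)                # checkpoint every 50 rounds
con.set_test_flag(True)  # must come after all other parameters
con.init()               # assumed: load data before choosing a model
con.set_model(models.TransE)
con.run()   # train
con.test()  # evaluate
```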