Finetune API Reference

Classifier

class finetune.Classifier(**kwargs)[source]

Classifies a single document into 1 of N categories.

Parameters:
  • config – A finetune.config.Settings object or None (for default config).
  • **kwargs – key-value pairs of config items to override.
cached_predict()

Context manager that prevents the recreation of the tensorflow graph on every call to BaseModel.predict().
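When predict() is called many times (for example once per request), rebuilding the graph dominates the cost. A minimal usage sketch, assuming finetune is installed; the model call is gated behind a flag so the snippet is safe to run as-is:

```python
documents = ["first document", "second document", "third document"]

RUN_MODEL = False  # set True when finetune and its base model weights are available
if RUN_MODEL:
    from finetune import Classifier

    model = Classifier()
    # The graph is built once on entry and reused for every predict() call.
    with model.cached_predict():
        for doc in documents:
            print(model.predict([doc]))
```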

context_span_to_label_span(batch_context_spans, batch_text_chunks=None)

Copy relevant context spans into each corresponding label span as denoted by batch_text_chunks

create_base_model(filename, exists_ok=False)

Saves the current weights in the correct file format to be used as a base model.

Parameters:
  • filename – the path to save the base model, relative to finetune’s base model filestore.
  • exists_ok – whether to replace the model if it already exists.

featurize(X)[source]

Embeds inputs in learned feature space. Can be called before or after calling finetune().

Parameters:X – list or array of text to embed.
Returns:np.array of features of shape (n_examples, embedding_size).
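The returned features can be used directly with downstream estimators. A sketch, assuming finetune is installed (the model call is gated behind a flag):

```python
texts = ["good product", "bad product", "average product"]

RUN_MODEL = False  # requires finetune and downloaded base model weights
if RUN_MODEL:
    from finetune import Classifier

    model = Classifier()
    features = model.featurize(texts)  # np.array, shape (3, embedding_size)
    assert features.shape[0] == len(texts)
```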
featurize_sequence(*args, **kwargs)

Base method to get raw token-level features out of the model. These features are the same features that are fed into the target_model.

fill_in_context_gaps(X, context)

Ensure all tokens are covered by context; if they are not, fill in the gaps with the default style provided by the user.

finetune(X, Y=None, batch_size=None)[source]
Parameters:
  • X – list or array of text.
  • Y – integer or string-valued class labels.
  • batch_size – integer number of examples per batch. When N_GPUS > 1, this number corresponds to the number of training examples provided to each GPU.
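An end-to-end sketch of the workflow above, assuming finetune is installed; the training call is gated behind a flag:

```python
train_texts = ["great service", "terrible service", "loved it", "awful experience"]
train_labels = ["positive", "negative", "positive", "negative"]

RUN_MODEL = False  # set True when finetune and its base model weights are available
if RUN_MODEL:
    from finetune import Classifier

    model = Classifier(batch_size=2)            # config override via **kwargs
    model.finetune(train_texts, train_labels)   # fit() is an alias
    labels = model.predict(["really enjoyed this"])
    probas = model.predict_proba(["really enjoyed this"])
```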

classmethod finetune_grid_search(Xs, Y, *, test_size, eval_fn=None, probs=False, return_all=False, **kwargs)

Performs grid search over config items defined using “GridSearchable” objects and returns either the full results or the config object corresponding to the best result. The default config contains grid-searchable objects for the most important parameters to search over.

Parameters:
  • Xs – Input text. Either [num_samples] or [sequence, num_samples] for single or multi input models respectively.
  • Y – Targets, A list of targets, [num_samples] that correspond to each sample in Xs.
  • test_size – Int or float. If an int is given this number of samples is used to validate, if a float is given then that fraction of samples is used.
  • eval_fn – An eval function that takes 2 inputs (prediction, truth) and returns a float, with a max value being desired.
  • probs – If true, eval_fn is passed probability outputs from predict_proba, otherwise the output of predict is used.
  • return_all – If True, all results are returned, if False, only the best config is returned.
  • kwargs – Keyword arguments to pass to get_config()
Returns:

By default, the best config object is returned. If return_all is True, a list of tuples of the form [(config, eval_fn output), …] is returned.
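A usage sketch for the grid search described above. The method name finetune_grid_search is inferred from the finetune_grid_search_cv variant documented below; the call is gated behind a flag:

```python
train_texts = ["win a free prize", "meeting moved to 3pm",
               "claim your reward", "quarterly report attached"]
train_labels = ["spam", "ham", "spam", "ham"]

RUN_MODEL = False  # requires finetune and base model weights
if RUN_MODEL:
    from finetune import Classifier

    best_config = Classifier.finetune_grid_search(
        train_texts, train_labels,
        test_size=0.25,      # hold out 25% of samples for validation
        return_all=False,    # return only the best config object
    )
    model = Classifier(config=best_config)
```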

classmethod finetune_grid_search_cv(Xs, Y, *, n_splits, test_size, eval_fn=None, probs=False, return_all=False, **kwargs)

Performs cross-validated grid search over config items defined using “GridSearchable” objects and returns either the full results or the config object corresponding to the best result. The default config contains grid-searchable objects for the most important parameters to search over.

It should be noted that the CV splits are not guaranteed to be unique, but each split is evaluated against every set of hyperparameters.

Parameters:
  • Xs – Input text. Either [num_samples] or [sequence, num_samples] for single or multi input models respectively.
  • Y – Targets, A list of targets, [num_samples] that correspond to each sample in Xs.
  • n_splits – Number of CV splits to do.
  • test_size – Int or float. If an int is given this number of samples is used to validate, if a float is given then that fraction of samples is used.
  • eval_fn – An eval function that takes 2 batches of outputs and returns a float, with a max value being desired. An arithmetic mean must make sense for this metric.
  • probs – If true, eval_fn is passed probability outputs from predict_proba, otherwise the output of predict is used.
  • return_all – If True, all results are returned, if False, only the best config is returned.
  • kwargs – Keyword arguments to pass to get_config()
Returns:

By default, the best config object is returned. If return_all is True, a list of tuples of the form [(config, eval_fn output), …] is returned.
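A sketch with a custom eval_fn; since probs defaults to False, the function receives the outputs of predict() and the ground truth, and higher values are better (the model call is gated behind a flag):

```python
def accuracy(preds, truth):
    # Higher is better, and an arithmetic mean across folds is meaningful.
    return sum(p == t for p, t in zip(preds, truth)) / len(truth)

texts = ["win a free prize", "meeting moved to 3pm",
         "claim your reward", "quarterly report attached"]
labels = ["spam", "ham", "spam", "ham"]

RUN_MODEL = False  # requires finetune and base model weights
if RUN_MODEL:
    from finetune import Classifier

    results = Classifier.finetune_grid_search_cv(
        texts, labels,
        n_splits=3, test_size=0.25,
        eval_fn=accuracy, return_all=True,
    )
    # return_all=True -> [(config, mean eval_fn output), ...]
```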

fit(*args, **kwargs)

An alias for finetune.

generate_text(seed_text='', max_length=None, use_extra_toks=None)

Performs a prediction on the language modelling objective given some seed text, using noisy greedy decoding. The temperature parameter for decoding is set in the config.

Parameters:
  • seed_text – defaults to the empty string; forms the starting point for generation.
  • max_length – the maximum length to decode to.
Returns:a string containing the generated text.

load(*args, **kwargs)

Load a saved fine-tuned model from disk. The path provided should be a folder containing the .pkl and tf.Saver() files.

Parameters:
  • path – string path name to load model from. Same value as previously provided to save(). Must be a folder.
  • **kwargs

    key-value pairs of config items to override.

predict(X, probas=False)[source]

Produces a list of most likely class labels as determined by the fine-tuned model.

Parameters:X – list or array of text to embed.
Returns:list of class labels.

predict_proba(X)[source]

Produces a probability distribution over classes for each example in X.

Parameters:X – list or array of text to embed.
Returns:list of dictionaries. Each dictionary maps from a class label to its assigned class probability.
save(path)

Saves the state of the model to disk in the folder specified by path. If path does not exist, it will be created automatically.

Save is performed in two steps:
  • Serialize tf graph to disk using tf.Saver
  • Serialize python model using pickle
Note:
Does not serialize state of Adam optimizer. Should not be used to save / restore a training model.
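A save/load round trip following the caveats above (inference only; optimizer state is not restored). Gated behind a flag, assuming finetune is installed:

```python
train_texts = ["great", "awful"]
train_labels = ["positive", "negative"]

RUN_MODEL = False  # requires finetune and base model weights
if RUN_MODEL:
    from finetune import Classifier

    model = Classifier()
    model.fit(train_texts, train_labels)
    model.save("my_model")  # a folder; created if it does not exist

    # Later, possibly in another process:
    restored = Classifier.load("my_model", batch_size=4)  # kwargs override config
    restored.predict(["new document"])
```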
transform(*args, **kwargs)

An alias for featurize.

Regressor

class finetune.Regressor(**kwargs)[source]

Regresses one or more floating point values given a single document.

For a full list of configuration options, see finetune.config.

Parameters:
  • config – A config object generated by finetune.config.get_config or None (for default config).
  • **kwargs – key-value pairs of config items to override.
cached_predict()

Context manager that prevents the recreation of the tensorflow graph on every call to BaseModel.predict().

context_span_to_label_span(batch_context_spans, batch_text_chunks=None)

Copy relevant context spans into each corresponding label span as denoted by batch_text_chunks

create_base_model(filename, exists_ok=False)

Saves the current weights in the correct file format to be used as a base model.

Parameters:
  • filename – the path to save the base model, relative to finetune’s base model filestore.
  • exists_ok – whether to replace the model if it already exists.

featurize(X)[source]

Embeds inputs in learned feature space. Can be called before or after calling finetune().

Parameters:X – list or array of text to embed.
Returns:np.array of features of shape (n_examples, embedding_size).
featurize_sequence(*args, **kwargs)

Base method to get raw token-level features out of the model. These features are the same features that are fed into the target_model.

fill_in_context_gaps(X, context)

Ensure all tokens are covered by context; if they are not, fill in the gaps with the default style provided by the user.

finetune(X, Y=None, batch_size=None)[source]
Parameters:
  • X – list or array of text.
  • Y – floating point targets
  • batch_size – integer number of examples per batch. When N_GPUS > 1, this number corresponds to the number of training examples provided to each GPU.
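A sketch of fitting the Regressor on floating point targets, gated behind a flag and assuming finetune is installed:

```python
review_texts = ["amazing", "mediocre", "terrible", "pretty decent"]
star_ratings = [5.0, 3.0, 1.0, 3.5]  # floating point targets

RUN_MODEL = False  # requires finetune and base model weights
if RUN_MODEL:
    from finetune import Regressor

    model = Regressor()
    model.finetune(review_texts, star_ratings)
    predicted = model.predict(["pretty good"])  # list of floats
```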

classmethod finetune_grid_search(Xs, Y, *, test_size, eval_fn=None, probs=False, return_all=False, **kwargs)

Performs grid search over config items defined using “GridSearchable” objects and returns either the full results or the config object corresponding to the best result. The default config contains grid-searchable objects for the most important parameters to search over.

Parameters:
  • Xs – Input text. Either [num_samples] or [sequence, num_samples] for single or multi input models respectively.
  • Y – Targets, A list of targets, [num_samples] that correspond to each sample in Xs.
  • test_size – Int or float. If an int is given this number of samples is used to validate, if a float is given then that fraction of samples is used.
  • eval_fn – An eval function that takes 2 inputs (prediction, truth) and returns a float, with a max value being desired.
  • probs – If true, eval_fn is passed probability outputs from predict_proba, otherwise the output of predict is used.
  • return_all – If True, all results are returned, if False, only the best config is returned.
  • kwargs – Keyword arguments to pass to get_config()
Returns:

By default, the best config object is returned. If return_all is True, a list of tuples of the form [(config, eval_fn output), …] is returned.

classmethod finetune_grid_search_cv(Xs, Y, *, n_splits, test_size, eval_fn=None, probs=False, return_all=False, **kwargs)

Performs cross-validated grid search over config items defined using “GridSearchable” objects and returns either the full results or the config object corresponding to the best result. The default config contains grid-searchable objects for the most important parameters to search over.

It should be noted that the CV splits are not guaranteed to be unique, but each split is evaluated against every set of hyperparameters.

Parameters:
  • Xs – Input text. Either [num_samples] or [sequence, num_samples] for single or multi input models respectively.
  • Y – Targets, A list of targets, [num_samples] that correspond to each sample in Xs.
  • n_splits – Number of CV splits to do.
  • test_size – Int or float. If an int is given this number of samples is used to validate, if a float is given then that fraction of samples is used.
  • eval_fn – An eval function that takes 2 batches of outputs and returns a float, with a max value being desired. An arithmetic mean must make sense for this metric.
  • probs – If true, eval_fn is passed probability outputs from predict_proba, otherwise the output of predict is used.
  • return_all – If True, all results are returned, if False, only the best config is returned.
  • kwargs – Keyword arguments to pass to get_config()
Returns:

By default, the best config object is returned. If return_all is True, a list of tuples of the form [(config, eval_fn output), …] is returned.

fit(*args, **kwargs)

An alias for finetune.

generate_text(seed_text='', max_length=None, use_extra_toks=None)

Performs a prediction on the language modelling objective given some seed text, using noisy greedy decoding. The temperature parameter for decoding is set in the config.

Parameters:
  • seed_text – defaults to the empty string; forms the starting point for generation.
  • max_length – the maximum length to decode to.
Returns:a string containing the generated text.

load(*args, **kwargs)

Load a saved fine-tuned model from disk. The path provided should be a folder containing the .pkl and tf.Saver() files.

Parameters:
  • path – string path name to load model from. Same value as previously provided to save(). Must be a folder.
  • **kwargs

    key-value pairs of config items to override.

predict(X)[source]

Produces a list of predicted values as determined by the fine-tuned model.

Parameters:X – list or array of text to embed.
Returns:list of floating point predictions.
predict_proba(X)[source]

Produces a probability distribution over classes for each example in X.

Parameters:X – list or array of text to embed.
Returns:list of dictionaries. Each dictionary maps from a class label to its assigned class probability.
save(path)

Saves the state of the model to disk in the folder specified by path. If path does not exist, it will be created automatically.

Save is performed in two steps:
  • Serialize tf graph to disk using tf.Saver
  • Serialize python model using pickle
Note:
Does not serialize state of Adam optimizer. Should not be used to save / restore a training model.
transform(*args, **kwargs)

An alias for featurize.

SequenceLabeler

class finetune.SequenceLabeler(**kwargs)[source]

Labels each token in a sequence as belonging to 1 of N token classes.

Parameters:
  • config – A finetune.config.Settings object or None (for default config).
  • **kwargs – key-value pairs of config items to override.
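A sketch of the training inputs; the per-document span-dict label format ("start"/"end" character offsets plus a "label", optionally the covered "text") is an assumption here. The model call is gated behind a flag:

```python
texts = ["Alice flew to Paris on Monday."]
labels = [[  # one label list per document
    {"start": 0, "end": 5, "label": "PERSON", "text": "Alice"},
    {"start": 14, "end": 19, "label": "LOCATION", "text": "Paris"},
]]

RUN_MODEL = False  # requires finetune and base model weights
if RUN_MODEL:
    from finetune import SequenceLabeler

    model = SequenceLabeler()
    model.fit(texts, labels)
    spans = model.predict(texts)                        # labeled spans
    token_level = model.predict(texts, per_token=True)  # raw per-token output
```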
cached_predict()

Context manager that prevents the recreation of the tensorflow graph on every call to BaseModel.predict().

context_span_to_label_span(batch_context_spans, batch_text_chunks=None)

Copy relevant context spans into each corresponding label span as denoted by batch_text_chunks

create_base_model(filename, exists_ok=False)

Saves the current weights in the correct file format to be used as a base model.

Parameters:
  • filename – the path to save the base model, relative to finetune’s base model filestore.
  • exists_ok – whether to replace the model if it already exists.

featurize(X)[source]

Embeds inputs in learned feature space. Can be called before or after calling finetune().

Parameters:Xs – An iterable of lists or array of text, shape [batch, n_inputs, tokens]
Returns:np.array of features of shape (n_examples, embedding_size).
featurize_sequence(*args, **kwargs)

Base method to get raw token-level features out of the model. These features are the same features that are fed into the target_model.

fill_in_context_gaps(X, context)

Ensure all tokens are covered by context; if they are not, fill in the gaps with the default style provided by the user.

classmethod finetune_grid_search(Xs, Y, *, test_size, eval_fn=None, probs=False, return_all=False, **kwargs)

Performs grid search over config items defined using “GridSearchable” objects and returns either the full results or the config object corresponding to the best result. The default config contains grid-searchable objects for the most important parameters to search over.

Parameters:
  • Xs – Input text. Either [num_samples] or [sequence, num_samples] for single or multi input models respectively.
  • Y – Targets, A list of targets, [num_samples] that correspond to each sample in Xs.
  • test_size – Int or float. If an int is given this number of samples is used to validate, if a float is given then that fraction of samples is used.
  • eval_fn – An eval function that takes 2 inputs (prediction, truth) and returns a float, with a max value being desired.
  • probs – If true, eval_fn is passed probability outputs from predict_proba, otherwise the output of predict is used.
  • return_all – If True, all results are returned, if False, only the best config is returned.
  • kwargs – Keyword arguments to pass to get_config()
Returns:

By default, the best config object is returned. If return_all is True, a list of tuples of the form [(config, eval_fn output), …] is returned.

classmethod finetune_grid_search_cv(Xs, Y, *, n_splits, test_size, eval_fn=None, probs=False, return_all=False, **kwargs)

Performs cross-validated grid search over config items defined using “GridSearchable” objects and returns either the full results or the config object corresponding to the best result. The default config contains grid-searchable objects for the most important parameters to search over.

It should be noted that the CV splits are not guaranteed to be unique, but each split is evaluated against every set of hyperparameters.

Parameters:
  • Xs – Input text. Either [num_samples] or [sequence, num_samples] for single or multi input models respectively.
  • Y – Targets, A list of targets, [num_samples] that correspond to each sample in Xs.
  • n_splits – Number of CV splits to do.
  • test_size – Int or float. If an int is given this number of samples is used to validate, if a float is given then that fraction of samples is used.
  • eval_fn – An eval function that takes 2 batches of outputs and returns a float, with a max value being desired. An arithmetic mean must make sense for this metric.
  • probs – If true, eval_fn is passed probability outputs from predict_proba, otherwise the output of predict is used.
  • return_all – If True, all results are returned, if False, only the best config is returned.
  • kwargs – Keyword arguments to pass to get_config()
Returns:

By default, the best config object is returned. If return_all is True, a list of tuples of the form [(config, eval_fn output), …] is returned.

fit(*args, **kwargs)

An alias for finetune.

generate_text(seed_text='', max_length=None, use_extra_toks=None)

Performs a prediction on the language modelling objective given some seed text, using noisy greedy decoding. The temperature parameter for decoding is set in the config.

Parameters:
  • seed_text – defaults to the empty string; forms the starting point for generation.
  • max_length – the maximum length to decode to.
Returns:a string containing the generated text.

load(*args, **kwargs)

Load a saved fine-tuned model from disk. The path provided should be a folder containing the .pkl and tf.Saver() files.

Parameters:
  • path – string path name to load model from. Same value as previously provided to save(). Must be a folder.
  • **kwargs

    key-value pairs of config items to override.

predict(X, per_token=False)[source]

Produces a list of most likely class labels as determined by the fine-tuned model.

Parameters:
  • X – A list / array of text, shape [batch]
  • per_token – If True, return raw probabilities and labels on a per token basis
Returns:

list of class labels.

predict_proba(X)[source]

Produces class probabilities on a per-token basis, as determined by the fine-tuned model.

Parameters:X – A list / array of text, shape [batch]
Returns:list of labels with their associated class probabilities.
save(path)

Saves the state of the model to disk in the folder specified by path. If path does not exist, it will be created automatically.

Save is performed in two steps:
  • Serialize tf graph to disk using tf.Saver
  • Serialize python model using pickle
Note:
Does not serialize state of Adam optimizer. Should not be used to save / restore a training model.
transform(*args, **kwargs)

An alias for featurize.

Association

class finetune.Association(**kwargs)[source]

Labels each token in a sequence as belonging to 1 of N token classes, then builds a set of edges between the labeled tokens.

Parameters:
  • config – A finetune.config.Settings object or None (for default config).
  • **kwargs – key-value pairs of config items to override.
cached_predict()

Context manager that prevents the recreation of the tensorflow graph on every call to BaseModel.predict().

context_span_to_label_span(batch_context_spans, batch_text_chunks=None)

Copy relevant context spans into each corresponding label span as denoted by batch_text_chunks

create_base_model(filename, exists_ok=False)

Saves the current weights in the correct file format to be used as a base model.

Parameters:
  • filename – the path to save the base model, relative to finetune’s base model filestore.
  • exists_ok – whether to replace the model if it already exists.

featurize(X)[source]

Embeds inputs in learned feature space. Can be called before or after calling finetune().

Parameters:Xs – An iterable of lists or array of text, shape [batch, n_inputs, tokens]
Returns:np.array of features of shape (n_examples, embedding_size).
featurize_sequence(*args, **kwargs)

Base method to get raw token-level features out of the model. These features are the same features that are fed into the target_model.

fill_in_context_gaps(X, context)

Ensure all tokens are covered by context; if they are not, fill in the gaps with the default style provided by the user.

finetune(Xs, Y=None, batch_size=None)[source]
Parameters:
  • Xs – A list of strings.
  • Y – A list of labels in the same format as sequence labeling, but with an optional additional field of the form:

```
{
    ...
    "association": {
        "index": a,
        "relationship": relationship_name
    }
}
```

where index is the index of the relationship target in the label list and relationship_name is the type of the relationship.
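A sketch of labels carrying the association field, linking one span to another by its index in the label list; the span-dict format is an assumption here. The model call is gated behind a flag:

```python
texts = ["The cat sat on the mat."]
labels = [[
    {"start": 4, "end": 7, "label": "object", "text": "cat"},
    {"start": 19, "end": 22, "label": "object", "text": "mat",
     # Links this span to labels[0][0] ("cat") with a named relationship.
     "association": {"index": 0, "relationship": "on_top_of"}},
]]

RUN_MODEL = False  # requires finetune and base model weights
if RUN_MODEL:
    from finetune import Association

    model = Association()
    model.fit(texts, labels)
    model.predict(texts)
```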

classmethod finetune_grid_search(Xs, Y, *, test_size, eval_fn=None, probs=False, return_all=False, **kwargs)

Performs grid search over config items defined using “GridSearchable” objects and returns either the full results or the config object corresponding to the best result. The default config contains grid-searchable objects for the most important parameters to search over.

Parameters:
  • Xs – Input text. Either [num_samples] or [sequence, num_samples] for single or multi input models respectively.
  • Y – Targets, A list of targets, [num_samples] that correspond to each sample in Xs.
  • test_size – Int or float. If an int is given this number of samples is used to validate, if a float is given then that fraction of samples is used.
  • eval_fn – An eval function that takes 2 inputs (prediction, truth) and returns a float, with a max value being desired.
  • probs – If true, eval_fn is passed probability outputs from predict_proba, otherwise the output of predict is used.
  • return_all – If True, all results are returned, if False, only the best config is returned.
  • kwargs – Keyword arguments to pass to get_config()
Returns:

By default, the best config object is returned. If return_all is True, a list of tuples of the form [(config, eval_fn output), …] is returned.

classmethod finetune_grid_search_cv(Xs, Y, *, n_splits, test_size, eval_fn=None, probs=False, return_all=False, **kwargs)

Performs cross-validated grid search over config items defined using “GridSearchable” objects and returns either the full results or the config object corresponding to the best result. The default config contains grid-searchable objects for the most important parameters to search over.

It should be noted that the CV splits are not guaranteed to be unique, but each split is evaluated against every set of hyperparameters.

Parameters:
  • Xs – Input text. Either [num_samples] or [sequence, num_samples] for single or multi input models respectively.
  • Y – Targets, A list of targets, [num_samples] that correspond to each sample in Xs.
  • n_splits – Number of CV splits to do.
  • test_size – Int or float. If an int is given this number of samples is used to validate, if a float is given then that fraction of samples is used.
  • eval_fn – An eval function that takes 2 batches of outputs and returns a float, with a max value being desired. An arithmetic mean must make sense for this metric.
  • probs – If true, eval_fn is passed probability outputs from predict_proba, otherwise the output of predict is used.
  • return_all – If True, all results are returned, if False, only the best config is returned.
  • kwargs – Keyword arguments to pass to get_config()
Returns:

By default, the best config object is returned. If return_all is True, a list of tuples of the form [(config, eval_fn output), …] is returned.

fit(*args, **kwargs)

An alias for finetune.

generate_text(seed_text='', max_length=None, use_extra_toks=None)

Performs a prediction on the language modelling objective given some seed text, using noisy greedy decoding. The temperature parameter for decoding is set in the config.

Parameters:
  • seed_text – defaults to the empty string; forms the starting point for generation.
  • max_length – the maximum length to decode to.
Returns:a string containing the generated text.

load(*args, **kwargs)

Load a saved fine-tuned model from disk. The path provided should be a folder containing the .pkl and tf.Saver() files.

Parameters:
  • path – string path name to load model from. Same value as previously provided to save(). Must be a folder.
  • **kwargs

    key-value pairs of config items to override.

predict(X)[source]

Produces a list of most likely class labels as determined by the fine-tuned model.

Parameters:X – A list / array of text, shape [batch]
Returns:list of class labels.
predict_proba(X)[source]

Produces class probabilities on a per-token basis, as determined by the fine-tuned model.

Parameters:X – A list / array of text, shape [batch]
Returns:list of labels with their associated class probabilities.
save(path)

Saves the state of the model to disk in the folder specified by path. If path does not exist, it will be created automatically.

Save is performed in two steps:
  • Serialize tf graph to disk using tf.Saver
  • Serialize python model using pickle
Note:
Does not serialize state of Adam optimizer. Should not be used to save / restore a training model.
transform(*args, **kwargs)

An alias for featurize.

Comparison

class finetune.Comparison(**kwargs)[source]

Compares two documents to solve a classification task.

Parameters:
  • config – A finetune.config.Settings object or None (for default config).
  • **kwargs – key-value pairs of config items to override.
cached_predict()

Context manager that prevents the recreation of the tensorflow graph on every call to BaseModel.predict().

context_span_to_label_span(batch_context_spans, batch_text_chunks=None)

Copy relevant context spans into each corresponding label span as denoted by batch_text_chunks

create_base_model(filename, exists_ok=False)

Saves the current weights in the correct file format to be used as a base model.

Parameters:
  • filename – the path to save the base model, relative to finetune’s base model filestore.
  • exists_ok – whether to replace the model if it already exists.

featurize(pairs)[source]

Embeds inputs in learned feature space. Can be called before or after calling finetune().

Parameters:pairs – Array of text, shape [batch, 2]
Returns:np.array of features of shape (n_examples, embedding_size).
featurize_sequence(*args, **kwargs)

Base method to get raw token-level features out of the model. These features are the same features that are fed into the target_model.

fill_in_context_gaps(X, context)

Ensure all tokens are covered by context; if they are not, fill in the gaps with the default style provided by the user.

finetune(X, Y=None, batch_size=None)
Parameters:
  • X – list or array of text.
  • Y – integer or string-valued class labels.
  • batch_size – integer number of examples per batch. When N_GPUS > 1, this number corresponds to the number of training examples provided to each GPU.
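A sketch of the pairwise input format, gated behind a flag and assuming finetune is installed:

```python
pairs = [
    ["How do I reset my password?", "Password reset instructions"],
    ["How do I reset my password?", "Shipping and returns policy"],
]
labels = ["similar", "dissimilar"]

RUN_MODEL = False  # requires finetune and base model weights
if RUN_MODEL:
    from finetune import Comparison

    model = Comparison()
    model.fit(pairs, labels)  # each input is a [doc_a, doc_b] pair
    model.predict([["reset my password", "steps to recover your account"]])
```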

classmethod finetune_grid_search(Xs, Y, *, test_size, eval_fn=None, probs=False, return_all=False, **kwargs)

Performs grid search over config items defined using “GridSearchable” objects and returns either the full results or the config object corresponding to the best result. The default config contains grid-searchable objects for the most important parameters to search over.

Parameters:
  • Xs – Input text. Either [num_samples] or [sequence, num_samples] for single or multi input models respectively.
  • Y – Targets, A list of targets, [num_samples] that correspond to each sample in Xs.
  • test_size – Int or float. If an int is given this number of samples is used to validate, if a float is given then that fraction of samples is used.
  • eval_fn – An eval function that takes 2 inputs (prediction, truth) and returns a float, with a max value being desired.
  • probs – If true, eval_fn is passed probability outputs from predict_proba, otherwise the output of predict is used.
  • return_all – If True, all results are returned, if False, only the best config is returned.
  • kwargs – Keyword arguments to pass to get_config()
Returns:

By default, the best config object is returned. If return_all is True, a list of tuples of the form [(config, eval_fn output), …] is returned.

classmethod finetune_grid_search_cv(Xs, Y, *, n_splits, test_size, eval_fn=None, probs=False, return_all=False, **kwargs)

Performs cross-validated grid search over config items defined using “GridSearchable” objects and returns either the full results or the config object corresponding to the best result. The default config contains grid-searchable objects for the most important parameters to search over.

It should be noted that the CV splits are not guaranteed to be unique, but each split is evaluated against every set of hyperparameters.

Parameters:
  • Xs – Input text. Either [num_samples] or [sequence, num_samples] for single or multi input models respectively.
  • Y – Targets, A list of targets, [num_samples] that correspond to each sample in Xs.
  • n_splits – Number of CV splits to do.
  • test_size – Int or float. If an int is given this number of samples is used to validate, if a float is given then that fraction of samples is used.
  • eval_fn – An eval function that takes 2 batches of outputs and returns a float, with a max value being desired. An arithmetic mean must make sense for this metric.
  • probs – If true, eval_fn is passed probability outputs from predict_proba, otherwise the output of predict is used.
  • return_all – If True, all results are returned, if False, only the best config is returned.
  • kwargs – Keyword arguments to pass to get_config()
Returns:

By default, the best config object is returned. If return_all is True, a list of tuples of the form [(config, eval_fn output), …] is returned.

fit(*args, **kwargs)

An alias for finetune.

generate_text(seed_text='', max_length=None, use_extra_toks=None)

Performs a prediction on the language modelling objective given some seed text, using noisy greedy decoding. The temperature parameter for decoding is set in the config.

Parameters:
  • seed_text – defaults to the empty string; forms the starting point for generation.
  • max_length – the maximum length to decode to.
Returns:a string containing the generated text.

load(*args, **kwargs)

Load a saved fine-tuned model from disk. The path provided should be a folder containing the .pkl and tf.Saver() files.

Parameters:
  • path – string path name to load model from. Same value as previously provided to save(). Must be a folder.
  • **kwargs

    key-value pairs of config items to override.

predict(pairs)[source]

Produces a list of most likely class labels as determined by the fine-tuned model.

Parameters:pairs – Array of text, shape [batch, 2]
Returns:list of class labels.
predict_proba(pairs)[source]

Produces a probability distribution over classes for each example in X.

Parameters:pairs – Array of text, shape [batch, 2]
Returns:list of dictionaries. Each dictionary maps from a class label to its assigned class probability.
save(path)

Saves the state of the model to disk in the folder specified by path. If path does not exist, it will be created automatically.

Save is performed in two steps:
  • Serialize tf graph to disk using tf.Saver
  • Serialize python model using pickle
Note:
Does not serialize state of Adam optimizer. Should not be used to save / restore a training model.
transform(*args, **kwargs)

An alias for featurize.

MultiFieldClassifier

class finetune.MultiFieldClassifier(**kwargs)[source]

Classifies a set of documents into 1 of N classes.

Parameters:
  • config – A finetune.config.Settings object or None (for default config).
  • **kwargs – key-value pairs of config items to override.
cached_predict()

Context manager that prevents the recreation of the tensorflow graph on every call to BaseModel.predict().

context_span_to_label_span(batch_context_spans, batch_text_chunks=None)

Copy relevant context spans into each corresponding label span as denoted by batch_text_chunks

create_base_model(filename, exists_ok=False)

Saves the current weights in the correct file format to be used as a base model.

Parameters:
  • filename – the path to save the base model, relative to finetune’s base model filestore.
  • exists_ok – whether to replace the model if it already exists.

featurize(Xs)[source]

Embeds inputs in learned feature space. Can be called before or after calling finetune().

Parameters:*Xs – lists of text inputs, shape [batch, n_fields]
Returns:np.array of features of shape (n_examples, embedding_size).
featurize_sequence(*args, **kwargs)

Base method to get raw token-level features out of the model. These features are the same features that are fed into the target_model.

fill_in_context_gaps(X, context)

Ensure all tokens are covered by context; if they are not, fill in the gaps with the default style provided by the user.

finetune(Xs, Y=None, batch_size=None)[source]
Parameters:
  • *Xs – lists of text inputs, shape [batch, n_fields]
  • Y – integer or string-valued class labels. It is necessary for the items of Y to be sortable.
  • batch_size – integer number of examples per batch. When N_GPUS > 1, this number corresponds to the number of training examples provided to each GPU.

classmethod finetune_grid_search(Xs, Y, *, test_size, eval_fn=None, probs=False, return_all=False, **kwargs)

Performs grid search over config items defined using “GridSearchable” objects and returns either full results or the config object that relates to the best results. The default config contains grid searchable objects for the most important parameters to search over.

Parameters:
  • Xs – Input text. Either [num_samples] or [sequence, num_samples] for single or multi input models respectively.
  • Y – Targets, A list of targets, [num_samples] that correspond to each sample in Xs.
  • test_size – Int or float. If an int is given this number of samples is used to validate, if a float is given then that fraction of samples is used.
  • eval_fn – An eval function that takes 2 inputs (prediction, truth) and returns a float, with a max value being desired.
  • probs – If true, eval_fn is passed probability outputs from predict_proba, otherwise the output of predict is used.
  • return_all – If True, all results are returned, if False, only the best config is returned.
  • kwargs – Keyword arguments to pass to get_config()
Returns:

default is to return the best config object. If return_all is true, it returns a list of tuples of the form [(config, eval_fn output), … ]

classmethod finetune_grid_search_cv(Xs, Y, *, n_splits, test_size, eval_fn=None, probs=False, return_all=False, **kwargs)

Performs cross validated grid search over config items defined using “GridSearchable” objects and returns either full results or the config object that relates to the best results. The default config contains grid searchable objects for the most important parameters to search over.

Note that the CV splits are not guaranteed to be unique, but every split is evaluated against each set of hyperparameters.

Parameters:
  • Xs – Input text. Either [num_samples] or [sequence, num_samples] for single or multi input models respectively.
  • Y – Targets, A list of targets, [num_samples] that correspond to each sample in Xs.
  • n_splits – Number of CV splits to do.
  • test_size – Int or float. If an int is given this number of samples is used to validate, if a float is given then that fraction of samples is used.
  • eval_fn – An eval function that takes 2 batches of outputs and returns a float, with a max value being desired. An arithmetic mean must make sense for this metric.
  • probs – If true, eval_fn is passed probability outputs from predict_proba, otherwise the output of predict is used.
  • return_all – If True, all results are returned, if False, only the best config is returned.
  • kwargs – Keyword arguments to pass to get_config()
Returns:

default is to return the best config object. If return_all is true, it returns a list of tuples of the form [(config, eval_fn output), … ]
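
A hedged sketch of calling finetune_grid_search_cv: the accuracy metric below is an invented example of a valid eval_fn (a float where higher is better, and an arithmetic mean across splits is meaningful, as required). The placeholder inputs exist only to show the expected shapes.

```python
def accuracy(preds, truth):
    """Fraction of exact matches between predictions and targets."""
    return sum(p == t for p, t in zip(preds, truth)) / len(truth)

if __name__ == "__main__":
    from finetune import MultiFieldClassifier

    Xs = [["field a", "field b"]] * 8  # placeholder inputs, [batch, n_fields]
    Y = ["pos", "neg"] * 4             # placeholder labels, [num_samples]
    best_config = MultiFieldClassifier.finetune_grid_search_cv(
        Xs, Y,
        n_splits=3,
        test_size=0.2,
        eval_fn=accuracy,
        return_all=False,  # return only the best config object
    )
```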

fit(*args, **kwargs)

An alias for finetune.

generate_text(seed_text='', max_length=None, use_extra_toks=None)

Performs a prediction on the Language modeling objective given some seed text. It uses a noisy greedy decoding. Temperature parameter for decoding is set in the config. :param max_length: The maximum length to decode to. :param seed_text: Defaults to the empty string. This will form the starting point to begin modelling :return: A string containing the generated text.

load(*args, **kwargs)

Load a saved fine-tuned model from disk. Path provided should be a folder which contains .pkl and tf.Saver() files

Parameters:
  • path – string path name to load model from. Same value as previously provided to save(). Must be a folder.
  • **kwargs

    key-value pairs of config items to override.
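
A sketch of the save/load round trip (the folder name is invented; fitting elided):

```python
save_path = "./my_multifield_model"  # a folder; created if it does not exist

if __name__ == "__main__":
    from finetune import MultiFieldClassifier

    model = MultiFieldClassifier()
    # model.fit(...)
    model.save(save_path)  # note: optimizer state is not serialized
    restored = MultiFieldClassifier.load(save_path)
```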

predict(Xs)[source]

Produces list of most likely class labels as determined by the fine-tuned model.

Parameters:*Xs – lists of text inputs, shape [batch, n_fields]
Returns:list of class labels.
predict_proba(Xs)[source]

Produces a probability distribution over classes for each example in Xs.

Parameters:*Xs – lists of text inputs, shape [batch, n_fields]
Returns:list of dictionaries. Each dictionary maps from a class label to its assigned class probability.
save(path)

Saves the state of the model to disk in the folder specified by path. If the path does not exist, it is created automatically.

Save is performed in two steps:
  • Serialize tf graph to disk using tf.Saver
  • Serialize python model using pickle
Note:
Does not serialize state of Adam optimizer. Should not be used to save / restore a training model.
transform(*args, **kwargs)

An alias for featurize.

MultiFieldRegressor

class finetune.MultiFieldRegressor(**kwargs)[source]

Regresses one or more floating point values given a set of documents per example.

Parameters:
  • config – A finetune.config.Settings object or None (for default config).
  • **kwargs – key-value pairs of config items to override.
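
A hedged usage sketch (the field text and targets below are invented for illustration; the guarded section requires the finetune library):

```python
# Each example is a list of fields, shape [batch, n_fields].
train_Xs = [
    ["room: spacious", "location: downtown"],
    ["room: cramped", "location: suburbs"],
]
train_Y = [4.5, 2.0]  # floating point targets, one per example

if __name__ == "__main__":
    from finetune import MultiFieldRegressor

    model = MultiFieldRegressor()
    model.finetune(train_Xs, train_Y)
    print(model.predict(train_Xs))  # -> list of predicted floats
```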
cached_predict()

Context manager that prevents the recreation of the tensorflow graph on every call to BaseModel.predict().

context_span_to_label_span(batch_context_spans, batch_text_chunks=None)

Copy relevant context spans into each corresponding label span as denoted by batch_text_chunks

create_base_model(filename, exists_ok=False)

Saves the current weights in the correct file format to be used as a base model.

Parameters:
  • filename – the path to save the base model, relative to finetune’s base model filestore.
  • exists_ok – whether to replace the model if it already exists.

featurize(Xs)[source]

Embeds inputs in learned feature space. Can be called before or after calling finetune().

Parameters:*Xs – lists of text inputs, shape [batch, n_fields]
Returns:np.array of features of shape (n_examples, embedding_size).
featurize_sequence(*args, **kwargs)

Base method to get raw token-level features out of the model. These features are the same features that are fed into the target_model.

fill_in_context_gaps(X, context)

Ensure all tokens are covered by context; if they are not, fill in gaps with the default style as given from the user

finetune(Xs, Y=None, batch_size=None)[source]
Parameters:
  • *Xs – lists of text inputs, shape [batch, n_fields]
  • Y – floating point targets
  • batch_size – integer number of examples per batch. When N_GPUS > 1, this number corresponds to the number of training examples provided to each GPU.

classmethod finetune_grid_search(Xs, Y, *, test_size, eval_fn=None, probs=False, return_all=False, **kwargs)

Performs grid search over config items defined using “GridSearchable” objects and returns either full results or the config object that relates to the best results. The default config contains grid searchable objects for the most important parameters to search over.

Parameters:
  • Xs – Input text. Either [num_samples] or [sequence, num_samples] for single or multi input models respectively.
  • Y – Targets, A list of targets, [num_samples] that correspond to each sample in Xs.
  • test_size – Int or float. If an int is given this number of samples is used to validate, if a float is given then that fraction of samples is used.
  • eval_fn – An eval function that takes 2 inputs (prediction, truth) and returns a float, with a max value being desired.
  • probs – If true, eval_fn is passed probability outputs from predict_proba, otherwise the output of predict is used.
  • return_all – If True, all results are returned, if False, only the best config is returned.
  • kwargs – Keyword arguments to pass to get_config()
Returns:

default is to return the best config object. If return_all is true, it returns a list of tuples of the form [(config, eval_fn output), … ]

classmethod finetune_grid_search_cv(Xs, Y, *, n_splits, test_size, eval_fn=None, probs=False, return_all=False, **kwargs)

Performs cross validated grid search over config items defined using “GridSearchable” objects and returns either full results or the config object that relates to the best results. The default config contains grid searchable objects for the most important parameters to search over.

Note that the CV splits are not guaranteed to be unique, but every split is evaluated against each set of hyperparameters.

Parameters:
  • Xs – Input text. Either [num_samples] or [sequence, num_samples] for single or multi input models respectively.
  • Y – Targets, A list of targets, [num_samples] that correspond to each sample in Xs.
  • n_splits – Number of CV splits to do.
  • test_size – Int or float. If an int is given this number of samples is used to validate, if a float is given then that fraction of samples is used.
  • eval_fn – An eval function that takes 2 batches of outputs and returns a float, with a max value being desired. An arithmetic mean must make sense for this metric.
  • probs – If true, eval_fn is passed probability outputs from predict_proba, otherwise the output of predict is used.
  • return_all – If True, all results are returned, if False, only the best config is returned.
  • kwargs – Keyword arguments to pass to get_config()
Returns:

default is to return the best config object. If return_all is true, it returns a list of tuples of the form [(config, eval_fn output), … ]

fit(*args, **kwargs)

An alias for finetune.

generate_text(seed_text='', max_length=None, use_extra_toks=None)

Performs a prediction on the language modeling objective given some seed text, using noisy greedy decoding. The temperature parameter for decoding is set in the config.

Parameters:
  • seed_text – defaults to the empty string; forms the starting point for generation.
  • max_length – the maximum length to decode to.
Returns:A string containing the generated text.

load(*args, **kwargs)

Load a saved fine-tuned model from disk. Path provided should be a folder which contains .pkl and tf.Saver() files

Parameters:
  • path – string path name to load model from. Same value as previously provided to save(). Must be a folder.
  • **kwargs

    key-value pairs of config items to override.

predict(Xs)[source]

Produces a list of predicted target values as determined by the fine-tuned model.

Parameters:*Xs – lists of text inputs, shape [batch, n_fields]
Returns:list of predicted target values.
predict_proba(Xs)[source]

Produces a probability distribution over classes for each example in Xs.

Parameters:*Xs – lists of text inputs, shape [batch, n_fields]
Returns:list of dictionaries. Each dictionary maps from a class label to its assigned class probability.
save(path)

Saves the state of the model to disk in the folder specified by path. If the path does not exist, it is created automatically.

Save is performed in two steps:
  • Serialize tf graph to disk using tf.Saver
  • Serialize python model using pickle
Note:
Does not serialize state of Adam optimizer. Should not be used to save / restore a training model.
transform(*args, **kwargs)

An alias for featurize.

MultiLabelClassifier

class finetune.MultiLabelClassifier(*args, **kwargs)[source]

Classifies a single document into up to N of N categories.

Parameters:
  • config – A finetune.config.Settings object or None (for default config).
  • **kwargs – key-value pairs of config items to override.
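
A hedged usage sketch (documents and labels below are invented; the guarded section requires the finetune library):

```python
train_X = [
    "The food was great but the service was slow",
    "Terrible parking",
]
# Y is a list of lists: each document may carry up to N of the N labels.
train_Y = [["food", "service"], ["parking"]]

if __name__ == "__main__":
    from finetune import MultiLabelClassifier

    model = MultiLabelClassifier()
    model.fit(train_X, train_Y)
    print(model.predict(train_X))  # -> one list of labels per document
```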
cached_predict()

Context manager that prevents the recreation of the tensorflow graph on every call to BaseModel.predict().

context_span_to_label_span(batch_context_spans, batch_text_chunks=None)

Copy relevant context spans into each corresponding label span as denoted by batch_text_chunks

create_base_model(filename, exists_ok=False)

Saves the current weights in the correct file format to be used as a base model.

Parameters:
  • filename – the path to save the base model, relative to finetune’s base model filestore.
  • exists_ok – whether to replace the model if it already exists.

featurize(X)[source]

Embeds inputs in learned feature space. Can be called before or after calling finetune().

Parameters:X – list or array of text to embed.
Returns:np.array of features of shape (n_examples, embedding_size).
featurize_sequence(*args, **kwargs)

Base method to get raw token-level features out of the model. These features are the same features that are fed into the target_model.

fill_in_context_gaps(X, context)

Ensure all tokens are covered by context; if they are not, fill in gaps with the default style as given from the user

finetune(X, Y=None, batch_size=None)[source]
Parameters:
  • X – list or array of text.
  • Y – A list of lists containing labels for the corresponding X
  • batch_size – integer number of examples per batch. When N_GPUS > 1, this number corresponds to the number of training examples provided to each GPU.

classmethod finetune_grid_search(Xs, Y, *, test_size, eval_fn=None, probs=False, return_all=False, **kwargs)

Performs grid search over config items defined using “GridSearchable” objects and returns either full results or the config object that relates to the best results. The default config contains grid searchable objects for the most important parameters to search over.

Parameters:
  • Xs – Input text. Either [num_samples] or [sequence, num_samples] for single or multi input models respectively.
  • Y – Targets, A list of targets, [num_samples] that correspond to each sample in Xs.
  • test_size – Int or float. If an int is given this number of samples is used to validate, if a float is given then that fraction of samples is used.
  • eval_fn – An eval function that takes 2 inputs (prediction, truth) and returns a float, with a max value being desired.
  • probs – If true, eval_fn is passed probability outputs from predict_proba, otherwise the output of predict is used.
  • return_all – If True, all results are returned, if False, only the best config is returned.
  • kwargs – Keyword arguments to pass to get_config()
Returns:

default is to return the best config object. If return_all is true, it returns a list of tuples of the form [(config, eval_fn output), … ]

classmethod finetune_grid_search_cv(Xs, Y, *, n_splits, test_size, eval_fn=None, probs=False, return_all=False, **kwargs)

Performs cross validated grid search over config items defined using “GridSearchable” objects and returns either full results or the config object that relates to the best results. The default config contains grid searchable objects for the most important parameters to search over.

Note that the CV splits are not guaranteed to be unique, but every split is evaluated against each set of hyperparameters.

Parameters:
  • Xs – Input text. Either [num_samples] or [sequence, num_samples] for single or multi input models respectively.
  • Y – Targets, A list of targets, [num_samples] that correspond to each sample in Xs.
  • n_splits – Number of CV splits to do.
  • test_size – Int or float. If an int is given this number of samples is used to validate, if a float is given then that fraction of samples is used.
  • eval_fn – An eval function that takes 2 batches of outputs and returns a float, with a max value being desired. An arithmetic mean must make sense for this metric.
  • probs – If true, eval_fn is passed probability outputs from predict_proba, otherwise the output of predict is used.
  • return_all – If True, all results are returned, if False, only the best config is returned.
  • kwargs – Keyword arguments to pass to get_config()
Returns:

default is to return the best config object. If return_all is true, it returns a list of tuples of the form [(config, eval_fn output), … ]

fit(*args, **kwargs)

An alias for finetune.

generate_text(seed_text='', max_length=None, use_extra_toks=None)

Performs a prediction on the language modeling objective given some seed text, using noisy greedy decoding. The temperature parameter for decoding is set in the config.

Parameters:
  • seed_text – defaults to the empty string; forms the starting point for generation.
  • max_length – the maximum length to decode to.
Returns:A string containing the generated text.

load(*args, **kwargs)

Load a saved fine-tuned model from disk. Path provided should be a folder which contains .pkl and tf.Saver() files

Parameters:
  • path – string path name to load model from. Same value as previously provided to save(). Must be a folder.
  • **kwargs

    key-value pairs of config items to override.

predict(X, threshold=None)[source]

Produces a list of most likely class labels as determined by the fine-tuned model.

Parameters:X – list or array of text to embed.
Returns:list of class labels.
predict_proba(X)[source]

Produces a probability distribution over classes for each example in X.

Parameters:X – list or array of text to embed.
Returns:list of dictionaries. Each dictionary maps from a class label to its assigned class probability.
save(path)

Saves the state of the model to disk in the folder specified by path. If the path does not exist, it is created automatically.

Save is performed in two steps:
  • Serialize tf graph to disk using tf.Saver
  • Serialize python model using pickle
Note:
Does not serialize state of Adam optimizer. Should not be used to save / restore a training model.
transform(*args, **kwargs)

An alias for featurize.

OrdinalRegressor

class finetune.OrdinalRegressor(shared_threshold_weights=True, **kwargs)[source]

Classifies a document into two or more ordered categories.

For a full list of configuration options, see finetune.config.

Parameters:
  • config – A config object generated by finetune.config.get_config or None (for default config).
  • **kwargs – key-value pairs of config items to override.
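
A hedged usage sketch (the texts and rating targets below are invented; the guarded section requires the finetune library):

```python
# Targets are ordered categories, e.g. star ratings encoded as floats.
train_X = ["awful", "okay", "fantastic"]
train_Y = [1.0, 3.0, 5.0]

if __name__ == "__main__":
    from finetune import OrdinalRegressor

    model = OrdinalRegressor(shared_threshold_weights=True)
    model.fit(train_X, train_Y)
    print(model.predict(train_X))
```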
cached_predict()

Context manager that prevents the recreation of the tensorflow graph on every call to BaseModel.predict().

context_span_to_label_span(batch_context_spans, batch_text_chunks=None)

Copy relevant context spans into each corresponding label span as denoted by batch_text_chunks

create_base_model(filename, exists_ok=False)

Saves the current weights in the correct file format to be used as a base model.

Parameters:
  • filename – the path to save the base model, relative to finetune’s base model filestore.
  • exists_ok – whether to replace the model if it already exists.

featurize(X)[source]

Embeds inputs in learned feature space. Can be called before or after calling finetune().

Parameters:X – list or array of text to embed.
Returns:np.array of features of shape (n_examples, embedding_size).
featurize_sequence(*args, **kwargs)

Base method to get raw token-level features out of the model. These features are the same features that are fed into the target_model.

fill_in_context_gaps(X, context)

Ensure all tokens are covered by context; if they are not, fill in gaps with the default style as given from the user

finetune(X, Y=None, batch_size=None)[source]
Parameters:
  • X – list or array of text.
  • Y – floating point targets
  • batch_size – integer number of examples per batch. When N_GPUS > 1, this number corresponds to the number of training examples provided to each GPU.

classmethod finetune_grid_search(Xs, Y, *, test_size, eval_fn=None, probs=False, return_all=False, **kwargs)

Performs grid search over config items defined using “GridSearchable” objects and returns either full results or the config object that relates to the best results. The default config contains grid searchable objects for the most important parameters to search over.

Parameters:
  • Xs – Input text. Either [num_samples] or [sequence, num_samples] for single or multi input models respectively.
  • Y – Targets, A list of targets, [num_samples] that correspond to each sample in Xs.
  • test_size – Int or float. If an int is given this number of samples is used to validate, if a float is given then that fraction of samples is used.
  • eval_fn – An eval function that takes 2 inputs (prediction, truth) and returns a float, with a max value being desired.
  • probs – If true, eval_fn is passed probability outputs from predict_proba, otherwise the output of predict is used.
  • return_all – If True, all results are returned, if False, only the best config is returned.
  • kwargs – Keyword arguments to pass to get_config()
Returns:

default is to return the best config object. If return_all is true, it returns a list of tuples of the form [(config, eval_fn output), … ]

classmethod finetune_grid_search_cv(Xs, Y, *, n_splits, test_size, eval_fn=None, probs=False, return_all=False, **kwargs)

Performs cross validated grid search over config items defined using “GridSearchable” objects and returns either full results or the config object that relates to the best results. The default config contains grid searchable objects for the most important parameters to search over.

Note that the CV splits are not guaranteed to be unique, but every split is evaluated against each set of hyperparameters.

Parameters:
  • Xs – Input text. Either [num_samples] or [sequence, num_samples] for single or multi input models respectively.
  • Y – Targets, A list of targets, [num_samples] that correspond to each sample in Xs.
  • n_splits – Number of CV splits to do.
  • test_size – Int or float. If an int is given this number of samples is used to validate, if a float is given then that fraction of samples is used.
  • eval_fn – An eval function that takes 2 batches of outputs and returns a float, with a max value being desired. An arithmetic mean must make sense for this metric.
  • probs – If true, eval_fn is passed probability outputs from predict_proba, otherwise the output of predict is used.
  • return_all – If True, all results are returned, if False, only the best config is returned.
  • kwargs – Keyword arguments to pass to get_config()
Returns:

default is to return the best config object. If return_all is true, it returns a list of tuples of the form [(config, eval_fn output), … ]

fit(*args, **kwargs)

An alias for finetune.

generate_text(seed_text='', max_length=None, use_extra_toks=None)

Performs a prediction on the language modeling objective given some seed text, using noisy greedy decoding. The temperature parameter for decoding is set in the config.

Parameters:
  • seed_text – defaults to the empty string; forms the starting point for generation.
  • max_length – the maximum length to decode to.
Returns:A string containing the generated text.

load(*args, **kwargs)

Load a saved fine-tuned model from disk. Path provided should be a folder which contains .pkl and tf.Saver() files

Parameters:
  • path – string path name to load model from. Same value as previously provided to save(). Must be a folder.
  • **kwargs

    key-value pairs of config items to override.

predict(X)[source]

Produces a list of most likely class labels as determined by the fine-tuned model.

Parameters:X – list or array of text to embed.
Returns:list of class labels.
predict_proba(X)[source]

Produces a probability distribution over classes for each example in X.

Parameters:X – list or array of text to embed.
Returns:list of dictionaries. Each dictionary maps from a class label to its assigned class probability.
save(path)

Saves the state of the model to disk in the folder specified by path. If the path does not exist, it is created automatically.

Save is performed in two steps:
  • Serialize tf graph to disk using tf.Saver
  • Serialize python model using pickle
Note:
Does not serialize state of Adam optimizer. Should not be used to save / restore a training model.
transform(*args, **kwargs)

An alias for featurize.

ComparisonOrdinalRegressor

class finetune.ComparisonOrdinalRegressor(shared_threshold_weights=True, **kwargs)[source]

Compares two documents and classifies into two or more ordered categories.

For a full list of configuration options, see finetune.config.

Parameters:
  • config – A config object generated by finetune.config.get_config or None (for default config).
  • **kwargs – key-value pairs of config items to override.
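
A hedged usage sketch (document pairs and targets below are invented; the guarded section requires the finetune library):

```python
# Each example is a pair of documents to compare: shape [batch, 2].
pairs = [
    ["first essay text", "second essay text"],
    ["draft a", "draft b"],
]
scores = [2.0, 4.0]  # ordered targets, one per pair

if __name__ == "__main__":
    from finetune import ComparisonOrdinalRegressor

    model = ComparisonOrdinalRegressor()
    model.fit(pairs, scores)
    print(model.predict(pairs))  # -> list of floats, shape [batch]
```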
cached_predict()

Context manager that prevents the recreation of the tensorflow graph on every call to BaseModel.predict().

context_span_to_label_span(batch_context_spans, batch_text_chunks=None)

Copy relevant context spans into each corresponding label span as denoted by batch_text_chunks

create_base_model(filename, exists_ok=False)

Saves the current weights in the correct file format to be used as a base model.

Parameters:
  • filename – the path to save the base model, relative to finetune’s base model filestore.
  • exists_ok – whether to replace the model if it already exists.

featurize(X)

Embeds inputs in learned feature space. Can be called before or after calling finetune().

Parameters:X – list or array of text to embed.
Returns:np.array of features of shape (n_examples, embedding_size).
featurize_sequence(*args, **kwargs)

Base method to get raw token-level features out of the model. These features are the same features that are fed into the target_model.

fill_in_context_gaps(X, context)

Ensure all tokens are covered by context; if they are not, fill in gaps with the default style as given from the user

finetune(X, Y=None, batch_size=None)
Parameters:
  • X – list or array of text.
  • Y – floating point targets
  • batch_size – integer number of examples per batch. When N_GPUS > 1, this number corresponds to the number of training examples provided to each GPU.

classmethod finetune_grid_search(Xs, Y, *, test_size, eval_fn=None, probs=False, return_all=False, **kwargs)

Performs grid search over config items defined using “GridSearchable” objects and returns either full results or the config object that relates to the best results. The default config contains grid searchable objects for the most important parameters to search over.

Parameters:
  • Xs – Input text. Either [num_samples] or [sequence, num_samples] for single or multi input models respectively.
  • Y – Targets, A list of targets, [num_samples] that correspond to each sample in Xs.
  • test_size – Int or float. If an int is given this number of samples is used to validate, if a float is given then that fraction of samples is used.
  • eval_fn – An eval function that takes 2 inputs (prediction, truth) and returns a float, with a max value being desired.
  • probs – If true, eval_fn is passed probability outputs from predict_proba, otherwise the output of predict is used.
  • return_all – If True, all results are returned, if False, only the best config is returned.
  • kwargs – Keyword arguments to pass to get_config()
Returns:

default is to return the best config object. If return_all is true, it returns a list of tuples of the form [(config, eval_fn output), … ]

classmethod finetune_grid_search_cv(Xs, Y, *, n_splits, test_size, eval_fn=None, probs=False, return_all=False, **kwargs)

Performs cross validated grid search over config items defined using “GridSearchable” objects and returns either full results or the config object that relates to the best results. The default config contains grid searchable objects for the most important parameters to search over.

Note that the CV splits are not guaranteed to be unique, but every split is evaluated against each set of hyperparameters.

Parameters:
  • Xs – Input text. Either [num_samples] or [sequence, num_samples] for single or multi input models respectively.
  • Y – Targets, A list of targets, [num_samples] that correspond to each sample in Xs.
  • n_splits – Number of CV splits to do.
  • test_size – Int or float. If an int is given this number of samples is used to validate, if a float is given then that fraction of samples is used.
  • eval_fn – An eval function that takes 2 batches of outputs and returns a float, with a max value being desired. An arithmetic mean must make sense for this metric.
  • probs – If true, eval_fn is passed probability outputs from predict_proba, otherwise the output of predict is used.
  • return_all – If True, all results are returned, if False, only the best config is returned.
  • kwargs – Keyword arguments to pass to get_config()
Returns:

default is to return the best config object. If return_all is true, it returns a list of tuples of the form [(config, eval_fn output), … ]

fit(*args, **kwargs)

An alias for finetune.

generate_text(seed_text='', max_length=None, use_extra_toks=None)

Performs a prediction on the language modeling objective given some seed text, using noisy greedy decoding. The temperature parameter for decoding is set in the config.

Parameters:
  • seed_text – defaults to the empty string; forms the starting point for generation.
  • max_length – the maximum length to decode to.
Returns:A string containing the generated text.

load(*args, **kwargs)

Load a saved fine-tuned model from disk. Path provided should be a folder which contains .pkl and tf.Saver() files

Parameters:
  • path – string path name to load model from. Same value as previously provided to save(). Must be a folder.
  • **kwargs

    key-value pairs of config items to override.

predict(pairs)[source]

Produces a floating point prediction determined by the fine-tuned model.

Parameters:pairs – Array of text, shape [batch, 2]
Returns:list of floats, shape [batch]
predict_proba(X)

Produces a probability distribution over classes for each example in X.

Parameters:X – list or array of text to embed.
Returns:list of dictionaries. Each dictionary maps from a class label to its assigned class probability.
save(path)

Saves the state of the model to disk in the folder specified by path. If the path does not exist, it is created automatically.

Save is performed in two steps:
  • Serialize tf graph to disk using tf.Saver
  • Serialize python model using pickle
Note:
Does not serialize state of Adam optimizer. Should not be used to save / restore a training model.
transform(*args, **kwargs)

An alias for featurize.

MultiTask

class finetune.MultiTask(tasks, **kwargs)[source]

Target model for multi-task learning. Mini-batches are sampled from each task in proportion to the size of that task’s dataset.

Parameters:
  • tasks – A dictionary of pairs mapping string task names to model classes. eg. {“sst”: Classifier, “ner”: SequenceLabeler}
  • **kwargs – key-value pairs of config items to override. Note: The same config is used for each base task.
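
A hedged usage sketch (the per-task datasets below are invented, and the span-dict target format for SequenceLabeler is an assumption; the guarded section requires the finetune library):

```python
# Hypothetical per-task datasets (formats follow each single-task model).
sst_texts = ["a glowing review"]
sst_labels = ["positive"]
ner_texts = ["Alice visited Paris"]
ner_labels = [[{"start": 0, "end": 5, "label": "PER", "text": "Alice"}]]

if __name__ == "__main__":
    from finetune import Classifier, MultiTask, SequenceLabeler

    # Task names map to model classes; the same config is used for each task.
    model = MultiTask(tasks={"sst": Classifier, "ner": SequenceLabeler})

    # Inputs and targets are dictionaries keyed by task name, each value in
    # the format the corresponding single-task model expects.
    X = {"sst": sst_texts, "ner": ner_texts}
    Y = {"sst": sst_labels, "ner": ner_labels}
    model.finetune(X, Y)
    features = model.featurize(X)  # dict: task name -> features
```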
cached_predict()[source]

Context manager that prevents the recreation of the tensorflow graph on every call to BaseModel.predict().

Not supported for MultiTask.

context_span_to_label_span(batch_context_spans, batch_text_chunks=None)

Copy relevant context spans into each corresponding label span as denoted by batch_text_chunks

create_base_model(filename, exists_ok=False)

Saves the current weights in the correct file format to be used as a base model.

Parameters:
  • filename – the path to save the base model, relative to finetune’s base model filestore.
  • exists_ok – whether to replace the model if it already exists.

featurize(X)[source]

Runs featurization on the trained model for any of the tasks the model was trained on. Input and output formats are the same as for each of the individual tasks.

Parameters:X – A dictionary mapping from task name to data, in the format required by the task type.
Returns:A dictionary mapping from task name to the features for that task.
featurize_sequence(*args, **kwargs)

Base method to get raw token-level features out of the model. These features are the same features that are fed into the target_model.

fill_in_context_gaps(X, context)

Ensure all tokens are covered by context; if they are not, fill in gaps with the default style as given from the user

finetune(X, Y=None, batch_size=None)[source]
Parameters:
  • X – A dictionary mapping from task name to inputs in the same format required for each of the models.
  • Y – A dictionary mapping from task name to targets in the same format required for each of the models.
  • batch_size – Number of examples per batch. When N_GPUS > 1, this number corresponds to the number of training examples provided to each GPU.

classmethod finetune_grid_search(Xs, Y, *, test_size, eval_fn=None, probs=False, return_all=False, **kwargs)

Performs grid search over config items defined using “GridSearchable” objects and returns either full results or the config object that relates to the best results. The default config contains grid searchable objects for the most important parameters to search over.

Parameters:
  • Xs – Input text. Either [num_samples] or [sequence, num_samples] for single or multi input models respectively.
  • Y – Targets, A list of targets, [num_samples] that correspond to each sample in Xs.
  • test_size – Int or float. If an int is given this number of samples is used to validate, if a float is given then that fraction of samples is used.
  • eval_fn – An eval function that takes 2 inputs (prediction, truth) and returns a float, with a max value being desired.
  • probs – If true, eval_fn is passed probability outputs from predict_proba, otherwise the output of predict is used.
  • return_all – If True, all results are returned, if False, only the best config is returned.
  • kwargs – Keyword arguments to pass to get_config()
Returns:

By default, returns the best config object. If return_all is True, returns a list of tuples of the form [(config, eval_fn output), …].
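
A hedged sketch of how this might be used: the heavyweight call is commented out because it requires the finetune library and base-model weights, and `lr` and `GridSearchable` are used here as illustrative names based on the description above. The executable part shows how the best config can be recovered from a return_all=True result list.

```python
# Hedged sketch (not executed): grid search over learning rates.
# from finetune.config import GridSearchable
# best = SomeModel.finetune_grid_search(
#     trainX, trainY, test_size=0.1,
#     lr=GridSearchable(6.25e-5, [6.25e-4, 6.25e-5, 6.25e-6]),
# )

# With return_all=True the result is a list of (config, score) tuples;
# recovering the best config from such a list looks like this:
results = [({"lr": 6.25e-4}, 0.81), ({"lr": 6.25e-5}, 0.88)]
best_config, best_score = max(results, key=lambda pair: pair[1])
assert best_config == {"lr": 6.25e-5}
```
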

classmethod finetune_grid_search_cv(Xs, Y, *, n_splits, test_size, eval_fn=None, probs=False, return_all=False, **kwargs)

Performs cross validated grid search over config items defined using “GridSearchable” objects and returns either full results or the config object that relates to the best results. The default config contains grid searchable objects for the most important parameters to search over.

Note that the CV splits are not guaranteed to be unique, but each split is evaluated against every set of hyperparameters.

Parameters:
  • Xs – Input text. Either [num_samples] or [sequence, num_samples] for single or multi input models respectively.
  • Y – Targets, A list of targets, [num_samples] that correspond to each sample in Xs.
  • n_splits – Number of CV splits to do.
  • test_size – Int or float. If an int is given this number of samples is used to validate, if a float is given then that fraction of samples is used.
  • eval_fn – An eval function that takes 2 batches of outputs and returns a float, with a max value being desired. An arithmetic mean must make sense for this metric.
  • probs – If true, eval_fn is passed probability outputs from predict_proba, otherwise the output of predict is used.
  • return_all – If True, all results are returned, if False, only the best config is returned.
  • kwargs – Keyword arguments to pass to get_config()
Returns:

By default, returns the best config object. If return_all is True, returns a list of tuples of the form [(config, eval_fn output), …].

fit(*args, **kwargs)

An alias for finetune.

generate_text(seed_text='', max_length=None, use_extra_toks=None)

Performs a prediction on the language-modeling objective given some seed text, using noisy greedy decoding. The decoding temperature is set in the config.

Parameters:
  • seed_text – Defaults to the empty string. This forms the starting point for generation.
  • max_length – The maximum length to decode to.
Returns:A string containing the generated text.

load(*args, **kwargs)

Loads a saved fine-tuned model from disk. The path provided should be a folder containing the .pkl and tf.Saver() files.

Parameters:
  • path – string path name to load model from. Same value as previously provided to save(). Must be a folder.
  • **kwargs

    key-value pairs of config items to override.

predict(X)[source]

Runs inference on the trained model for any of the tasks the model was trained for. Input and output formats are the same as for each of the individual tasks.

Parameters:X – A dictionary mapping from task name to data, in the format required by the task type.
Returns:A dictionary mapping from task name to the predictions for that task.
predict_proba(X)[source]

Runs probability inference on the trained model for any of the tasks the model was trained for. Falls back to normal predict when probabilities are not available for a task, e.g. regression.

Input and output formats are the same as for each of the individual tasks.

Parameters:X – A dictionary mapping from task name to data, in the format required by the task type.
Returns:A dictionary mapping from task name to the predictions for that task.
save(path)

Saves the state of the model to disk in the folder specified by path. If path does not exist, it will be auto-created.

Save is performed in two steps:
  • Serialize tf graph to disk using tf.Saver
  • Serialize python model using pickle
Note:
Does not serialize the state of the Adam optimizer, so it should not be used to save and restore a model mid-training.
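
The two-step save described above can be illustrated with a toy mirror of the behaviour. This is not finetune's actual implementation: `toy_save` and the artifact filenames are hypothetical stand-ins that only demonstrate the documented shape (folder auto-created, one weights artifact plus one pickled python object).

```python
import os
import pickle
import tempfile

def toy_save(model_state, path):
    """Toy mirror of the documented two-step save (not finetune's actual
    implementation): a weights artifact plus a pickled python object."""
    os.makedirs(path, exist_ok=True)  # path is auto-created if absent
    with open(os.path.join(path, "weights.ckpt"), "wb") as f:
        f.write(b"\x00")  # stand-in for the tf.Saver checkpoint
    with open(os.path.join(path, "model.pkl"), "wb") as f:
        pickle.dump(model_state, f)

with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "saved_model")
    toy_save({"config": {"lr": 6.25e-05}}, target)
    saved_files = sorted(os.listdir(target))

assert saved_files == ["model.pkl", "weights.ckpt"]
```
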
transform(*args, **kwargs)

An alias for featurize.

LanguageModel

class finetune.LanguageModel(**kwargs)[source]

A Language Model for Finetune

Parameters:
  • config – A finetune.config.Settings object or None (for default config).
  • **kwargs – key-value pairs of config items to override.
cached_predict()

Context manager that prevents the recreation of the tensorflow graph on every call to BaseModel.predict().
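
To illustrate why this context manager matters, here is a toy model (not finetune internals) in which graph construction happens once per context rather than once per predict() call; outside the context, every call would pay the build cost again.

```python
from contextlib import contextmanager

class ToyModel:
    """Toy illustration (not finetune internals) of why cached_predict
    helps: the graph is built once per context rather than once per
    predict() call."""

    def __init__(self):
        self._graph = None
        self.builds = 0

    def _build_graph(self):
        self.builds += 1
        self._graph = object()  # stand-in for a compiled tensorflow graph

    @contextmanager
    def cached_predict(self):
        self._build_graph()
        try:
            yield self
        finally:
            self._graph = None  # drop the cache on exit

    def predict(self, text):
        if self._graph is None:
            self._build_graph()  # no context manager: rebuild every call
        return len(text)  # placeholder "prediction"

model = ToyModel()
with model.cached_predict():
    model.predict("first call")
    model.predict("second call")
assert model.builds == 1  # both calls reused the cached graph
```
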

context_span_to_label_span(batch_context_spans, batch_text_chunks=None)

Copy relevant context spans into each corresponding label span as denoted by batch_text_chunks

create_base_model(filename, exists_ok=False)

Saves the current weights in the correct file format to be used as a base model.

Parameters:
  • filename – the path at which to save the base model, relative to finetune’s base model filestore.
  • exists_ok – whether to replace the model if it already exists.

featurize(X)

Embeds inputs in learned feature space. Can be called before or after calling finetune().

Parameters:X – list or array of text to embed.
Returns:np.array of features of shape (n_examples, embedding_size).
featurize_sequence(*args, **kwargs)

Base method to get raw token-level features out of the model. These features are the same features that are fed into the target_model.

fill_in_context_gaps(X, context)

Ensures all tokens are covered by context; if they are not, fills in the gaps with the default style given by the user.

finetune(X, Y=None, batch_size=None)[source]
Parameters:
  • X – list or array of text.
  • Y – Not used.
  • batch_size – integer number of examples per batch. When N_GPUS > 1, this number corresponds to the number of training examples provided to each GPU.

classmethod finetune_grid_search(Xs, Y, *, test_size, eval_fn=None, probs=False, return_all=False, **kwargs)

Performs grid search over config items defined using “GridSearchable” objects and returns either the full results or the config object corresponding to the best result. The default config contains grid-searchable objects for the most important parameters to search over.

Parameters:
  • Xs – Input text. Either [num_samples] or [sequence, num_samples] for single or multi input models respectively.
  • Y – Targets, A list of targets, [num_samples] that correspond to each sample in Xs.
  • test_size – Int or float. If an int is given this number of samples is used to validate, if a float is given then that fraction of samples is used.
  • eval_fn – An eval function that takes 2 inputs (prediction, truth) and returns a float, with a max value being desired.
  • probs – If true, eval_fn is passed probability outputs from predict_proba, otherwise the output of predict is used.
  • return_all – If True, all results are returned, if False, only the best config is returned.
  • kwargs – Keyword arguments to pass to get_config()
Returns:

By default, returns the best config object. If return_all is True, returns a list of tuples of the form [(config, eval_fn output), …].

classmethod finetune_grid_search_cv(Xs, Y, *, n_splits, test_size, eval_fn=None, probs=False, return_all=False, **kwargs)

Performs cross validated grid search over config items defined using “GridSearchable” objects and returns either full results or the config object that relates to the best results. The default config contains grid searchable objects for the most important parameters to search over.

Note that the CV splits are not guaranteed to be unique, but each split is evaluated against every set of hyperparameters.

Parameters:
  • Xs – Input text. Either [num_samples] or [sequence, num_samples] for single or multi input models respectively.
  • Y – Targets, A list of targets, [num_samples] that correspond to each sample in Xs.
  • n_splits – Number of CV splits to do.
  • test_size – Int or float. If an int is given this number of samples is used to validate, if a float is given then that fraction of samples is used.
  • eval_fn – An eval function that takes 2 batches of outputs and returns a float, with a max value being desired. An arithmetic mean must make sense for this metric.
  • probs – If true, eval_fn is passed probability outputs from predict_proba, otherwise the output of predict is used.
  • return_all – If True, all results are returned, if False, only the best config is returned.
  • kwargs – Keyword arguments to pass to get_config()
Returns:

By default, returns the best config object. If return_all is True, returns a list of tuples of the form [(config, eval_fn output), …].

fit(*args, **kwargs)

An alias for finetune.

generate_text(seed_text='', max_length=None, use_extra_toks=None)

Performs a prediction on the language-modeling objective given some seed text, using noisy greedy decoding. The decoding temperature is set in the config.

Parameters:
  • seed_text – Defaults to the empty string. This forms the starting point for generation.
  • max_length – The maximum length to decode to.
Returns:A string containing the generated text.

load(*args, **kwargs)

Loads a saved fine-tuned model from disk. The path provided should be a folder containing the .pkl and tf.Saver() files.

Parameters:
  • path – string path name to load model from. Same value as previously provided to save(). Must be a folder.
  • **kwargs

    key-value pairs of config items to override.

predict(X)[source]

Produces the perplexity of each input sentence under the fine-tuned language model.

Parameters:X – list or array of text.
Returns:Perplexities of each of the input sentences.
predict_proba(X)[source]

Produces a probability distribution over classes for each example in X.

Parameters:X – list or array of text to embed.
Returns:list of dictionaries. Each dictionary maps from a class label to its assigned class probability.
save(path)

Saves the state of the model to disk in the folder specified by path. If path does not exist, it will be auto-created.

Save is performed in two steps:
  • Serialize tf graph to disk using tf.Saver
  • Serialize python model using pickle
Note:
Does not serialize the state of the Adam optimizer, so it should not be used to save and restore a model mid-training.
transform(*args, **kwargs)

An alias for featurize.

DeploymentModel

class finetune.DeploymentModel(featurizer=None, auxiliary_info=False, **kwargs)[source]

Implements cached inference for arbitrary tasks by loading weights efficiently, allowing weights to be swapped quickly while avoiding slow graph recompilation.

Parameters:
  • config – A finetune.config.Settings object or None (for default config).
  • **kwargs – key-value pairs of config items to override.
cached_predict()[source]

Context manager that prevents the recreation of the tensorflow graph on every call to BaseModel.predict().

context_span_to_label_span(batch_context_spans, batch_text_chunks=None)

Copy relevant context spans into each corresponding label span as denoted by batch_text_chunks

create_base_model(*args, **kwargs)[source]

Saves the current weights in the correct file format to be used as a base model.

Parameters:
  • filename – the path at which to save the base model, relative to finetune’s base model filestore.
  • exists_ok – whether to replace the model if it already exists.

featurize(X)[source]

Embeds inputs in learned feature space. Can be called before or after calling finetune().

Parameters:X – list or array of text to embed.
Returns:np.array of features of shape (n_examples, embedding_size).
featurize_sequence(*args, **kwargs)

Base method to get raw token-level features out of the model. These features are the same features that are fed into the target_model.

fill_in_context_gaps(X, context)

Ensures all tokens are covered by context; if they are not, fills in the gaps with the default style given by the user.

classmethod finetune_grid_search(*args, **kwargs)[source]

Performs grid search over config items defined using “GridSearchable” objects and returns either the full results or the config object corresponding to the best result. The default config contains grid-searchable objects for the most important parameters to search over.

Parameters:
  • Xs – Input text. Either [num_samples] or [sequence, num_samples] for single or multi input models respectively.
  • Y – Targets, A list of targets, [num_samples] that correspond to each sample in Xs.
  • test_size – Int or float. If an int is given this number of samples is used to validate, if a float is given then that fraction of samples is used.
  • eval_fn – An eval function that takes 2 inputs (prediction, truth) and returns a float, with a max value being desired.
  • probs – If true, eval_fn is passed probability outputs from predict_proba, otherwise the output of predict is used.
  • return_all – If True, all results are returned, if False, only the best config is returned.
  • kwargs – Keyword arguments to pass to get_config()
Returns:

By default, returns the best config object. If return_all is True, returns a list of tuples of the form [(config, eval_fn output), …].

classmethod finetune_grid_search_cv(*args, **kwargs)[source]

Performs cross validated grid search over config items defined using “GridSearchable” objects and returns either full results or the config object that relates to the best results. The default config contains grid searchable objects for the most important parameters to search over.

Note that the CV splits are not guaranteed to be unique, but each split is evaluated against every set of hyperparameters.

Parameters:
  • Xs – Input text. Either [num_samples] or [sequence, num_samples] for single or multi input models respectively.
  • Y – Targets, A list of targets, [num_samples] that correspond to each sample in Xs.
  • n_splits – Number of CV splits to do.
  • test_size – Int or float. If an int is given this number of samples is used to validate, if a float is given then that fraction of samples is used.
  • eval_fn – An eval function that takes 2 batches of outputs and returns a float, with a max value being desired. An arithmetic mean must make sense for this metric.
  • probs – If true, eval_fn is passed probability outputs from predict_proba, otherwise the output of predict is used.
  • return_all – If True, all results are returned, if False, only the best config is returned.
  • kwargs – Keyword arguments to pass to get_config()
Returns:

By default, returns the best config object. If return_all is True, returns a list of tuples of the form [(config, eval_fn output), …].

fit(*args, **kwargs)[source]

An alias for finetune.

generate_text(*args, **kwargs)[source]

Performs a prediction on the language-modeling objective given some seed text, using noisy greedy decoding. The decoding temperature is set in the config.

Parameters:
  • seed_text – Defaults to the empty string. This forms the starting point for generation.
  • max_length – The maximum length to decode to.
Returns:A string containing the generated text.

load(path, **kwargs)[source]

Loads a saved fine-tuned model from disk. The path provided should be a folder containing the .pkl and tf.Saver() files.

Parameters:
  • path – string path name to load model from. Same value as previously provided to save(). Must be a folder.
  • **kwargs

    key-value pairs of config items to override.

load_custom_model(path)[source]

Loads the target model, and either adapters or the entire featurizer, from file. Must be called after load_featurizer.

load_featurizer(override=False)[source]

Performs graph compilation of the featurizer, preventing most compilation overhead from occurring at predict time. Should be called after initialization but BEFORE any calls to load_custom_model or predict.
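
The required call order can be illustrated with a toy stand-in (not finetune internals): compile the featurizer once, then swap custom models cheaply before predicting. The path strings and return values below are hypothetical.

```python
class ToyDeployment:
    """Toy illustration (not finetune internals) of the call order the
    docs require: load_featurizer first, then load_custom_model, then
    predict; swapping custom models afterwards is cheap."""

    def __init__(self):
        self.featurizer_ready = False
        self.custom_loaded = False

    def load_featurizer(self):
        self.featurizer_ready = True  # one-off graph compilation

    def load_custom_model(self, path):
        if not self.featurizer_ready:
            raise RuntimeError("call load_featurizer first")
        self.custom_loaded = True  # fast weight swap, no recompilation

    def predict(self, X):
        if not (self.featurizer_ready and self.custom_loaded):
            raise RuntimeError("featurizer and custom model required")
        return ["label"] * len(X)

dep = ToyDeployment()
dep.load_featurizer()
dep.load_custom_model("path/to/task_a")  # hypothetical path
preds = dep.predict(["doc one", "doc two"])
assert preds == ["label", "label"]
```
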

predict(X, exclude_target=False)[source]

Performs inference using the weights and targets from the model file provided to load_custom_model.

Parameters:X – list or array of text.
Returns:list of class labels.
predict_proba(X)[source]

Produces a probability distribution over classes for each example in X.

Parameters:X – list or array of text to embed.
Returns:list of dictionaries. Each dictionary maps from a class label to its assigned class probability.
save(path)[source]

Saves the state of the model to disk in the folder specified by path. If path does not exist, it will be auto-created.

Save is performed in two steps:
  • Serialize tf graph to disk using tf.Saver
  • Serialize python model using pickle
Note:
Does not serialize the state of the Adam optimizer, so it should not be used to save and restore a model mid-training.
transform(*args, **kwargs)

An alias for featurize.