Lazy import in Python
In this post, we are going to understand what lazy import in Python means. With a lazy import, a library is not actually loaded until the moment it is first used, which lets us make many libraries available without writing (or paying the cost of) a separate import statement for each one. Have a look at the import statements below.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sklearn
import nltk
import os
import sys
import re
import seaborn as sns
import pickle
And the list goes on. In a typical data science project we end up importing many more libraries. In such cases, we can replace all of these statements with a single import of the pyforest library. Once imported, pyforest lazy-imports almost all of the common data science libraries we are likely to use in our project, along the lines of the sketch below.
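To see why this works, it helps to understand the idea behind a lazy import: the real module is loaded only when one of its attributes is first accessed. The following is a minimal sketch of that idea using importlib; it is not pyforest's actual implementation, and the class name LazyModule is just an illustrative choice.

import importlib

class LazyModule:
    """Placeholder that imports the real module on first attribute access."""

    def __init__(self, module_name):
        self._module_name = module_name
        self._module = None  # the real module is not loaded yet

    def __getattr__(self, attr):
        if self._module is None:
            # First use: perform the actual import now
            self._module = importlib.import_module(self._module_name)
        return getattr(self._module, attr)

# "np" is only a cheap placeholder here; numpy is imported on first use
np = LazyModule("numpy")
print(np.array([1, 2, 3]))  # this line triggers the real import of numpy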
In order to use this library we first need to install it in our system. Use the following command to install pyforest.
pip install pyforest
To lazy-import all the libraries available in pyforest, we just need to import it in our program, as shown in the code below.
import pyforest
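Note that import pyforest by itself loads almost nothing; the underlying libraries are imported only when we first use them. Here is a minimal sketch of that behavior (the outputs in the comments reflect typical pyforest behavior and may vary by version).

import pyforest

print(active_imports())              # [] – nothing has actually been imported yet

df = pd.DataFrame({"a": [1, 2, 3]})  # first use triggers the real "import pandas as pd"

print(active_imports())              # ['import pandas as pd']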
Pyforest also defines a couple of helper functions, which are discussed below.
active_imports()
This function takes no parameters and returns the libraries that have actually been imported (that is, used) so far in the program.
lazy_imports()
This function returns all the libraries available for lazy import in pyforest.
See the example below to understand how pyforest and these functions work.
import pyforest

array = np.array([1, 2, 3])  # numpy
print(active_imports())
Output:
import numpy as np
['import numpy as np']
The program below prints all the libraries available in pyforest.
import pyforest

# all available imports
print(lazy_imports())
Output:
['import glob', 'import nltk', 'import plotly as py', 'import plotly.express as px', 'import xgboost as xgb', 'import datetime as dt', 'from sklearn.ensemble import GradientBoostingRegressor', 'import matplotlib.pyplot as plt', 'from dask import dataframe as dd', 'from sklearn.ensemble import RandomForestRegressor', 'import pickle', 'from sklearn.ensemble import GradientBoostingClassifier', 'import re', 'from sklearn.ensemble import RandomForestClassifier', 'import plotly.graph_objs as go', 'import spacy', 'import pydot', 'from sklearn.feature_extraction.text import TfidfVectorizer', 'from sklearn.manifold import TSNE', 'import pandas as pd', 'import sys', 'import matplotlib as mpl', 'from sklearn.model_selection import train_test_split', 'import os', 'import awswrangler as wr', 'import gensim', 'from sklearn.preprocessing import OneHotEncoder', 'import tensorflow as tf', 'import altair as alt', 'import lightgbm as lgb', 'from pathlib import Path', 'import statistics', 'import bokeh', 'from openpyxl import load_workbook', 'import dash', 'import sklearn', 'from pyspark import SparkContext', 'import keras', 'import seaborn as sns', 'import tqdm', 'from sklearn import svm']
Thank you.
Also read: Introduction to NLTK: Tokenization, Stemming, Lemmatization, POS Tagging