Before I get into whom this certification is for and how I cleared it, let me get straight to the exam information.
Vemana Poem Generation using LSTMs in Tensorflow
I have been exploring text generation using bi-directional LSTMs in TensorFlow from Laurence Moroney's course, and tried the same approach for the Telugu language.
I have uploaded the cleaned dataset of 15 poems, transliterated into English, to Kaggle.
The approach is simple: we use only one LSTM layer, as the dataset is very small.
#Read as a list of lines and strip trailing whitespace/newlines
lines = [line.rstrip() for line in open('poems.txt')]
lines = [line for line in lines if line != '']
Create and fit our tokenizer
from tensorflow.keras.preprocessing.text import Tokenizer

tokenizer = Tokenizer()
tokenizer.fit_on_texts(lines)
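The core preprocessing step for this kind of generator is turning each line into n-gram input sequences, where the last token of each sequence becomes the prediction target. A minimal pure-Python sketch of what the Keras `Tokenizer` plus `texts_to_sequences` would produce (the two sample lines below are stand-ins, not drawn from the uploaded dataset):

```python
# Sketch of n-gram sequence generation for text generation, mirroring
# what Tokenizer + texts_to_sequences do. Sample lines are stand-ins.
lines = ["uppu kappurambu okka polika nundu",
         "chooda chooda ruchula jaada veru"]

# Build a word index (Tokenizer reserves 0 for padding, so start at 1)
word_index = {}
for line in lines:
    for word in line.split():
        if word not in word_index:
            word_index[word] = len(word_index) + 1

# For each line, emit every prefix of length >= 2; during training the
# last token of each prefix is the word the model learns to predict.
input_sequences = []
for line in lines:
    token_list = [word_index[w] for w in line.split()]
    for i in range(2, len(token_list) + 1):
        input_sequences.append(token_list[:i])

print(input_sequences[:3])  # → [[1, 2], [1, 2, 3], [1, 2, 3, 4]]
```

These sequences are then pre-padded to a common length before being fed to the LSTM.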
In this blog, I will try to explain how to write native code for Linear Regression in Python.
1. We will start off by writing the equation function: given a single value of x, its weight (B1), and the intercept (B0), it predicts Y = B0 + B1 * X.
#Prediction for a single instance
y_pred = B0 + B1 * x
2. Calculate the error/delta for each instance.
#Delta/error for Gradient Descent
error = y_pred-y
3. Add the Gradient Descent update step.
#Gradient Descent Step
B0 = B0 - (alpha * error)
B1 = B1 - (alpha * error * x)
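Putting the three steps together, here is a minimal sketch of the full stochastic gradient descent loop for simple linear regression; the toy data and learning rate are illustrative only (the points follow y = 4 + 3x exactly):

```python
# Simple linear regression fit by stochastic gradient descent,
# combining the prediction, error, and update steps above.
# Toy data generated from y = 4 + 3x (illustrative only).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [4.0, 7.0, 10.0, 13.0, 16.0]

B0, B1 = 0.0, 0.0   # intercept and slope, initialised to zero
alpha = 0.05        # learning rate

for epoch in range(2000):
    for x, y in zip(xs, ys):
        y_pred = B0 + B1 * x         # step 1: predict
        error = y_pred - y           # step 2: error/delta
        B0 = B0 - (alpha * error)    # step 3: update intercept
        B1 = B1 - (alpha * error * x)  #        and slope

print(round(B0, 2), round(B1, 2))
```

Since the data is noiseless, the loop converges to the true coefficients B0 = 4 and B1 = 3.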
Owing to the rising demand for AI/ML, the popularity of the programming languages most used for Data Science keeps growing day by day.
The majority of Data Scientists, Analysts, and Developers use Python and R with a visualization tool to build their data solutions.
Being an ML Developer, my job is to take the raw code (code in Jupyter notebooks) handed over by the analyst, optimize it, write pipelines to reduce code overhead, save graphs and models programmatically, and expose it as a REST API. In this process, the majority of people use TensorFlow 2.0/PyTorch to implement their solution, whereas some people build the solution entirely…
With all the focus now on AI and ML after the era of big data, it is really important to understand how we build efficient, distributed ML solutions on top of a big-data environment. This is not an installation tutorial.
What are PySpark and SparkR?
PySpark is a driver library that enables Python to communicate with a Spark cluster; similarly, for R there is the SparkR package. Both are developed by the Apache Spark project.
How hard is it to configure?
It is a fairly straightforward process: you need to install Python (preferably Anaconda) on all the nodes of your cluster at the same…
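The node-by-node setup can be sketched as a small shell loop; the node names below are hypothetical, and the point is simply an identical Python installation with the PySpark driver library at the same path on every node:

```shell
# Hypothetical node names; replace with your cluster's hosts.
# Assumes the same Python (e.g. Anaconda) is already installed at the
# same path on each node, then adds the PySpark driver library on top.
for node in node1 node2 node3; do
    ssh "$node" 'pip install pyspark'
done
```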
House sale prices for King County homes sold between May 2014 and May 2015.
Build a model that predicts the price of a house, given a set of features of the house.
Tensorflow 2.0 with Keras — ReLU
import pandas as pd
import numpy as np
import seaborn as sns
from matplotlib import pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
2. Exploratory Data Analysis
data = pd.read_csv('DATA/kc_house_data.csv')
(i) Peek at the Data
Owing to the impact of Covid-19, most people vacated the cities and moved to their hometowns and villages. How interesting and useful it would be if we could analyze the pollution levels in the city pre-Covid and during Covid.
Now before we move on: this is not a Machine Learning or Deep Learning use case for now; it is just a simple analysis of the data and of how Covid-19 affected pollution.
The data was collected from the Central Control Room for Air Quality Management — All India official Website and monitoring centre at Central University, Hyderabad — TSPCB.
Professional ML Developer | DL Enthusiast