Job Description
Data Scientist
Location: Bangalore
No. of Positions: 4
Onshore preferred; offshore (India) is acceptable, but the candidate must be based in Bangalore
Interview Process: Single round, face-to-face only
Experience: 5-6 years
Profile Expectation: Strong hands-on, development-oriented candidates

Required Skills:
- Strong proficiency in Python (Pandas, NumPy, Scikit-learn)
- Strong SQL skills for data extraction and analysis
- Hands-on experience in machine learning (regression, classification, clustering)
- Solid understanding of statistics and probability
- Experience in data cleaning, feature engineering, and model evaluation
- Knowledge of time series analysis and forecasting

Tools & Platforms:
- Python libraries: Scikit-learn, TensorFlow / PyTorch (preferred)
- Data visualization: Power BI, Tableau, Matplotlib, Seaborn
- Big data exposure: Spark / PySpark (good to have)
- Version control: Git / GitHub
- Cloud exposure: AWS, Azure, or GCP
- Data platforms: Snowflake / BigQuery / Redshift (preferred)
- Understanding of ETL and data pipelines

Business & Domain Exposure:
- Ability to convert business problems into data-driven solutions
- Experience working with large, real-world datasets
- Strong analytical, communication, and stakeholder management skills
- Domain exposure to Banking, Insurance, Retail, or Telecom is a plus
- Experience in risk modeling, customer analytics, or fraud detection is desirable
- Awareness of data privacy and compliance standards (POPIA knowledge is an advantage)
Education: Bachelor's degree
Minimum Experience: 5 years