Wednesday, 30 November 2022

FotoFinder - early detection of skin cancer

 FotoFinder Systems GmbH, a manufacturer of medical imaging systems for dermatology, has announced the launch of bodystudio ATBM master. For the first time, the system makes Total Body Dermoscopy possible, an advanced development of the Automated Total Body Mapping (ATBM®) technology. Its essence lies in the intelligent interplay between hardware, imaging technology and dedicated software applications, according to a company press release.

Automated Total Body Mapping (ATBM®) represents the gold standard in digital dermoscopy, proving to be an extremely effective tool for detecting skin cancer as well as other skin abnormalities. This complex system examines the skin in detail and tracks how pigmented nevi (moles) evolve over time.

Studies conducted worldwide indicate a very high incidence of skin cancer, which makes periodic skin examination essential. FotoFinder incorporates highly advanced technology that makes it possible to analyze all the changes the skin undergoes and to take the measures needed to prevent disease.

FotoFinder is a complex system built from high-performance equipment. The professional camera photographs the affected area and automatically stores the pictures in the computer's memory for analysis by the dermatologist. The integrated medicam is fitted with special lenses that magnify skin lesions up to 70 times, analyzing them in depth to determine whether or not they pose a risk of disease.



The professional AI-based software application Moleanalyzer pro assists physicians in analyzing moles and assessing their risk. The system relies on one of the most powerful deep learning algorithms evaluated in clinical tests to date. In the "Man against Machine" study* conducted at the University Hospital in Heidelberg, the FotoFinder algorithm achieved impressively high sensitivity and specificity scores, and it can compete even with dermoscopy experts in terms of diagnostic quality.

All FotoFinder systems are manufactured at FotoFinder's own production facility in Bad Birnbach, Bavaria (Germany). FotoFinder is DIN EN ISO 13485:2016 certified and has already received several awards for corporate management and design.

FotoFinder integrates focused body-examination programs. These are:


  • Bodyscan: The purpose of this program is to detect new skin lesions that have appeared on the body. It automatically compares the initial pictures with those taken at the next session and flags any changes, recommending pigmented nevi with an altered structure for closer investigation.
  • Dynamole: This program contributes to the early detection of malignant melanoma. It analyzes in depth the structure, size and color of pigmented lesions and compares the initial images with those taken at the next check-up, contributing to the final diagnosis (a small sketch of this baseline-versus-follow-up idea follows the list).
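
As a loose illustration of the baseline-versus-follow-up idea (not FotoFinder's actual algorithm), the sketch below reduces lesions to image coordinates and flags any follow-up lesion that has no nearby baseline match; the coordinates and distance threshold are invented:

    # Minimal sketch of baseline-vs-follow-up comparison; NOT the real
    # Bodyscan algorithm. Coordinates and threshold are invented.
    import numpy as np

    baseline  = np.array([[120, 340], [400, 210], [515, 640]])  # lesion (x, y) in px
    follow_up = np.array([[122, 338], [402, 212], [250, 480]])

    def new_lesions(baseline, follow_up, max_dist=15.0):
        flagged = []
        for spot in follow_up:
            dists = np.linalg.norm(baseline - spot, axis=1)
            if dists.min() > max_dist:  # no baseline lesion close enough
                flagged.append(spot.tolist())
        return flagged

    print(new_lesions(baseline, follow_up))  # -> [[250, 480]]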


Who should get an examination with FotoFinder?

If any of the situations below apply to you, schedule a visit to a dermatologist now:

  • You have a large number of moles on your body (more than 50)
  • You have cases of skin cancer in your family
  • You are already dealing with a melanoma
  • You have large moles
  • You have noticed changes in the appearance of your pigmented nevi
  • You have noticed new pigmented nevi appearing
  • You had sunburns in childhood or adolescence
  • You have very fair skin (phototype I)
The ABCDE rule for identifying pigmented structures that pose a health risk (a toy scoring sketch follows the list):
A - Asymmetry of the pigmented nevus

B - Borders are irregular

C - Color different from the initial one

D - Diameter larger than 6 mm

E - Excrescence: a swelling that appears on top of an already existing pigmented structure
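
As a toy illustration of how the ABCDE criteria could feed a simple risk flag, assuming boolean inputs produced by some prior image-analysis step (this is not a medical tool and not FotoFinder's method):

    # Toy ABCDE risk flag; inputs are assumed to come from a separate
    # image-analysis step. Not a medical tool, not FotoFinder's method.
    from dataclasses import dataclass

    @dataclass
    class LesionFeatures:
        asymmetric: bool        # A: asymmetry
        irregular_border: bool  # B: irregular borders
        color_changed: bool     # C: color differs from baseline
        diameter_mm: float      # D: measured diameter
        excrescence: bool       # E: new growth on the lesion

    def abcde_score(f: LesionFeatures) -> int:
        """Count how many ABCDE criteria the lesion meets (0-5)."""
        return sum([f.asymmetric, f.irregular_border, f.color_changed,
                    f.diameter_mm > 6.0,  # classic 6 mm threshold
                    f.excrescence])

    lesion = LesionFeatures(True, False, True, 7.2, False)
    if abcde_score(lesion) >= 2:
        print("Flag for dermatologist review.")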


Bibliography:

https://elos.ro/tratamente/dermatoscopie-digitala/fotofinder/

https://www.google.com/amp/s/financialintelligence.ro/fotofinder-prezinta-generatia-urmatoare-de-sisteme-pentru-detectarea-timpurie-a-cancerului-de-piele/amp/

Wednesday, 23 November 2022

Anipuppy - nose print identification using deep learning

 


A South Korean company has developed a biometric recognition tool allowing dogs to be identified by their nose prints.

Once pet owners register the nose pattern and general information of their dog into an app called "Anipuppy", the information can be easily recalled by scanning the dog's nose print.

“It's a 3D biometric algorithm based on AI (artificial intelligence) and deep learning that we have now put into smartphones so that you can take pictures of the nose patterns and use it to identify each animal," said Sujin Choi, director of iSciLab Corporation.

With the new technology, which the company says is 99.9 per cent accurate, people who find lost dogs can quickly and directly communicate with their owners.
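
The company has not published its algorithm, but a common pattern for this kind of biometric matching is to embed each image with a deep network and compare embeddings by similarity; a minimal sketch under that assumption (the embeddings, registry and threshold are placeholders, not iSciLab's proprietary 3D method):

    # Illustrative sketch only: embed each nose-print image with a deep
    # network and match by cosine similarity. iSciLab's actual 3D
    # algorithm is proprietary; embeddings and threshold are placeholders.
    import numpy as np

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def identify(query_emb, registry, threshold=0.9):
        """registry: dog_id -> embedding registered by the owner."""
        best_id, best_sim = None, -1.0
        for dog_id, emb in registry.items():
            sim = cosine_similarity(query_emb, emb)
            if sim > best_sim:
                best_id, best_sim = dog_id, sim
        return best_id if best_sim >= threshold else None  # None = unknown dog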


Making dog registrations easier

Currently, it is mandatory to register pets with a microchip or an external ID in South Korea. However, the country hasn’t seen an increase in registration since the introduction of the pet registration system in 2007.

Only 38 per cent of the nation's 6 million pet dogs were registered, according to a 2020 report by the South Korean Ministry of Agriculture, Food and Rural Affairs. Animal rights experts say pet owners are hesitant as they are concerned about the cumbersome process and potential health problems of microchip implants.

That's where the nose ID solution comes in handy - it's not intrusive and much quicker to administer than inserting a chip. iSciLab has been collaborating with the South Korean government since 2019 to develop and test its nose ID technology for commercialisation. 

The project aims to be completed by 2024 so it can become an official dog identification and registration method for the country's database. The company expects to charge around €14 per dog.

It says that in the future, the technology could be used to identify other animals such as cats, cows and deer.



Defect Detection using Machine Learning

  Computer vision is a field of Artificial Intelligence which gives machines one of the five senses – the sense of sight. From a technical point of view, it enables various machines to process visual inputs (digital images, videos, etc.) in order to gather useful knowledge.

  Even though the computer vision field has many applications, ranging from healthcare, where it is used in the detection of anomalies in MRI and X-ray scans, to transportation, where it is used in self-driving cars and pedestrian detection, the main subject of this article is defect detection.

  Defect detection is an application of computer vision that equips a system with tools to detect abnormalities in objects: inconsistencies in dimensions, colours or shapes; parts that are bent, cracked, scratched or misprinted; and so on.
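
One classical, pre-deep-learning way to do this is to compare each part against a "golden" reference image and flag large pixel differences; a minimal OpenCV sketch (file names and thresholds are placeholders, and real systems add alignment, lighting normalization and usually a learned model on top):

    # Minimal sketch: compare a part image against a golden reference
    # and flag large pixel differences. File names and thresholds are
    # placeholders for illustration only.
    import cv2
    import numpy as np

    reference = cv2.imread("golden_part.png", cv2.IMREAD_GRAYSCALE)
    sample    = cv2.imread("inspected_part.png", cv2.IMREAD_GRAYSCALE)

    diff = cv2.absdiff(reference, sample)          # per-pixel difference
    _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)

    defect_ratio = np.count_nonzero(mask) / mask.size
    print("defective" if defect_ratio > 0.001 else "ok")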


  The main purpose of defect detection in fields such as retail and manufacturing is to create an automated system capable of aiding, and even replacing, the human component in the process.
    

  There are numerous benefits when an automated system for defect detection is created:

  • Low Error Rate – since manual inspection requires the presence of a person and depends on the inspector's judgment, some missed imperfections can be attributed to human error.
  • Faster – in comparison with manual inspection, an automated system does not get tired and does not become more likely to miss defects as time goes on.
  • Low Hazard Risk – inspections are carried out in many environments, some of which can be hazardous to humans; for an automated system, that risk is non-existent.
  • Precision – it is well known that the human eye is incapable of making precise measurements, especially at very small scales, so an automated system with multiple cameras will be more precise than a person.
  • Cost of labour – an automated system that is precise, fast and has a low error rate is a costly acquisition; over time, however, it costs less than manual inspection.
  One maker of such automated systems is Mitutoyo, which combined artificial intelligence with computer vision tools to solve the defect detection problem with high accuracy.

  On a final note, defect detection is a major point in product quality assurance, and making it stricter by eliminating error-prone elements can lead to the creation of almost perfect products.



Wednesday, 16 November 2022

Violence Prediction and Recognition for Video Security Applications


    Viisights is an Israeli company that provides AI-powered behavioral understanding systems for safe and smart cities, enterprises, campuses, banks, financial institutions, critical infrastructures, transportation hubs and autonomous mobility. Their mission is to use artificial intelligence technologies to facilitate human-like pattern prediction to create fully autonomous video intelligence systems with the ability to identify context and notify operators when an event of interest occurs. Viisights addresses a wide range of applications, including violence and weapon recognition, context-related suspicious activity recognition, crowd behavior and social-distancing, traffic monitoring, indoor and outdoor safety (including fire and smoke detection).

 

Some of the behaviors that Viisights technologies can identify and predict: 





Violence Recognition - Viisights' advanced behavioral analytics identify physical altercations, defined as at least two people hitting each other. The video analytics identify hitting as hands moving toward the other person, touching and then disengaging, with the action repeated numerous times. The system can also recognize different fighting styles, including kicking, punching or wrestling, in both indoor and outdoor settings.


Weapons Detection - Viisights wise can recognize a person holding a weapon in a threatening position. The presence of a weapon alone, not held in a threatening position, does not trigger an alert; an alert fires only when the weapon is brandished or prepared for use. The behavioral analytics detect the combination of the position and the presence of the specific weapon, helping to eliminate false alarms.


Crowd In-Action - Large groups of people can behave in a wide variety of ways, most of them non-violent. Wise, an advanced behavioral recognition system, can learn to recognize suspicious behavior that can escalate into violent action in a crowd. For example, a large group of people pushing one another or otherwise acting aggressively for a specified period of time would trigger an alert. This allows security teams to take action before the situation erupts into a riot. 


Vandalism - Vandalism costs taxpayers and business owners dearly, and Viisights is prepared to identify it and alert system operators so they can end it before it escalates. By detecting an individual or a group of people repeatedly throwing objects or hitting a surface, Viisights is able to identify vandalism in progress and alert the operator to take appropriate action immediately.


The technology used by Viisights is based on a unique implementation of deep neural networks. These AI-driven networks are capable of analyzing video content and deducing high-level concepts from it.


Viisights technology recognizes the behavior of diverse objects, as well as their relevant contexts. For example, the system is capable of recognizing an individual moving back and forth in a predefined area. Such behavior may be "loitering". However, the way people behave at a bus stop differs from their behavior near an ATM. Viisights' intelligent video analytics system automatically identifies the location type - bus stop or ATM - without any manual setup or calibration. The combination of human behavior (e.g. moving back and forth) and location type yields a unique insight that classifies the movement either as loitering or as something entirely different.
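
A toy sketch of this behavior-plus-context idea (Viisights' actual system is proprietary; the threshold and labels below are invented for illustration):

    # Toy behavior-plus-context rule; Viisights' real system is
    # proprietary. Threshold and labels are invented.
    def classify_movement(direction_changes, scene_type):
        """direction_changes: back-and-forth reversals of a tracked person
        within a time window; scene_type: output of a scene classifier."""
        pacing = direction_changes >= 4
        if not pacing:
            return "normal"
        # identical pacing behavior means different things per location
        return "waiting" if scene_type == "bus_stop" else "loitering"

    print(classify_movement(6, "atm"))       # -> loitering
    print(classify_movement(6, "bus_stop"))  # -> waiting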


Viisights overcomes the heavy computational demands of this kind of holistic video analysis with a two-fold approach:

  • Using NVIDIA GPU processors, which provide the demanding processing power required by the system.

  • Incorporating a unique system architecture that significantly shortens the processing time of each analysis aspect, thus allowing the system to complete the holistic analysis in near real time.


Viisights' intelligent behavioral recognition video understanding technology utilizes a uniquely orchestrated architecture that enables the effective implementation of advanced deep neural networks. This architecture supports multiple holistic views (a small sketch of the multi-scale idea follows the list) by using:


  • Multi-scale image analysis


  • Time-aware analysis


  • Smart integration between object detection and the tracking mechanism


  • Innovative object detection structure that accelerates performance and increases accuracy
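
As a small illustration of the first item, multi-scale analysis is often implemented as an image pyramid, running the detector at several resolutions so that both near (large) and distant (small) objects are found; a minimal sketch:

    # Minimal image-pyramid sketch for multi-scale analysis; a detector
    # would be run on every level so objects of all sizes are found.
    import cv2

    def image_pyramid(frame, levels=3, scale=0.5):
        """Return the frame at several resolutions."""
        pyramid = [frame]
        for _ in range(levels - 1):
            frame = cv2.resize(frame, None, fx=scale, fy=scale)
            pyramid.append(frame)
        return pyramid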


Product model - Viisights' behavioral recognition video understanding technology builds on classical deep learning models while implementing unique training tools and methodologies to reduce the amount of data required for training.


Viisights uses NVIDIA GPUs both in training and in the production system. For training it uses the Tesla P100 GPU; in production it uses several types of GPUs, depending on the feature-set configuration and the required workload. With this choice of hardware, Viisights combines the power of advanced GPUs with affordable cost in the production environment.

In conclusion, through the power of Artificial Intelligence (AI), machine learning and deep learning, Viisights can identify and understand a wide range of behaviors and potential threats, including detecting violence in real time. These innovative behavioral recognition video analytics are helping organizations and municipalities around the world minimize and prevent crime and other risks.



Bibliography:


  1. https://www.viisights.com/products/wise/violent-activity/

  2. https://www.milestonesys.com/marketplace/viisights/

  3. https://intl.convergint.com/apps/viisights/

  4. https://www.milestonesys.com/marketplace/viisights/viisights-wise-/




 

Tuesday, 15 November 2022

Emotion Recognition using Deep Learning

    One of the singular traits of human beings that has contributed enormously to the development and growth of mankind is the ability to communicate precisely through rich and powerful spoken (and, later in history, also written) languages. That said, a significant percentage of what is communicated does not travel through those languages but through nonverbal cues. These cues can take the form of gestures performed, for example, with the hands, or of facial expressions that convey information about what is felt inside but not necessarily spoken.

    Given how relevant facial expressions, as well as speech itself, have been to human interactions, it is not surprising that they have been researched for centuries. In [1] it is described how studies on facial expressions were already being performed in the Aristotelian era, in the 4th century BC.

    Since the start of the 21st century, fast-growing computer multimedia technology and continuous breakthroughs in AI have brought great progress in speech-based emotion recognition. Traditional machine learning algorithms based on Gaussian mixture models, support vector machines (SVM) and artificial neural networks have achieved brilliant results in speech-based emotion recognition tasks. However, these traditional algorithms fall short in the accuracy of emotion recognition from speech and images. Improving that accuracy on top of existing technologies is a critical goal for AI and deep learning algorithms.

    As the deep neural network most commonly used to analyze visual images, the CNN can greatly reduce the number of parameters involved thanks to its parameter-sharing mechanism, which is why it is widely used in image and video recognition. In a CNN, the input layer receives the data. Speech or image data are usually converted into a feature vector and fed into the network, and the convolution kernels in each convolutional layer convolve the output of the previous layer. Through local connectivity and weight sharing, the CNN greatly reduces the number of parameters and improves learning efficiency.

    Through multi-layer convolution, the low-level features extracted from the data are passed to a rectified linear unit layer and a pooling layer for down-sampling. Pooling not only further reduces the number of network training parameters but also strengthens the model's fit to a certain extent. Finally, the fully connected layer passes the data to the output neurons, and the output layer produces the final result. Figure 1 displays the whole pipeline of a CNN.

 


Fig 1. Structure of the CNN model taken from [6]
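
A minimal PyTorch sketch of the structure just described (convolution, rectification, pooling, fully connected output), sized here for 48x48 grayscale face crops and 7 emotion classes as in common datasets such as FER2013; this is an illustration, not the exact model from [6]:

    # Minimal CNN sketch: conv -> ReLU -> pool -> fully connected.
    # Sized for 48x48 grayscale faces and 7 emotion classes; an
    # illustration, not the exact model from [6].
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),                      # 48x48 -> 24x24
        nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),                      # 24x24 -> 12x12
        nn.Flatten(),
        nn.Linear(64 * 12 * 12, 128), nn.ReLU(),
        nn.Linear(128, 7),                    # 7 emotion logits
    )

    logits = model(torch.randn(1, 1, 48, 48))
    print(logits.shape)  # torch.Size([1, 7])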

In the past two years, emotion AI vendors have moved into completely new areas and industries, helping organizations to create a better customer experience and unlock real cost savings. These uses include:

  1. Video gaming. Using computer vision, the game console/video game detects emotions via facial expressions during the game and adapts to it.[2]
  2. Medical diagnosis. Software can help doctors with the diagnosis of diseases such as Alexithymia by using face analysis.[3]
  3. Education. Learning software prototypes have been developed to adapt to kids’ emotions. When the child shows frustration because a task is too difficult or too simple, the program adapts the task so it becomes less or more challenging. Another learning system helps autistic children recognize other people's emotions.[4]
  4. Employee safety. Based on Gartner client inquiries, demand for employee safety solutions is on the rise. Emotion AI can help analyze the stress and anxiety levels of employees who have very demanding jobs, such as first responders.[5]

References: 

[1] Bettadapura, Vinay. "Face expression recognition and analysis: the state of the art." arXiv preprint arXiv:1203.6722 (2012).

[2] Aggag, Ahmed & Revett, Kenneth. (2011). Affective gaming: a GSR based approach. 262-266.

[3] Pedrosa Gil, F.; Ridout, N.; Kessler, H.; Neuffer, M.; Schoechlin, C.; Traue, H.C.; Nickel, M. "Facial emotion recognition and alexithymia in adults with somatoform disorders." Depression and Anxiety 25(11): E133-E141 (2007).

[4] M. Bouhlal, K. Aarika, R. Ait Abdelouahid, S. Elfilali, E. Benlahmar, Emotions recognition as innovative tool for improving students’ performance and learning approaches, Procedia Computer Science, Volume 175, 2020

[5] https://www.unite.ai/recognizing-employee-stress-through-facial-analysis-at-work/

[6] https://www.frontiersin.org/articles/10.3389/fpsyg.2021.818833/full

 

Wednesday, 9 November 2022

Car License Plate Recognition

Introduction

    As machine learning and artificial intelligence gain ever more attention and development, many of a specialist's tasks, whatever the domain, are becoming either automated or at least machine-assisted. In some domains of expertise we deal with images, and images can be hard and time-consuming to process, especially small, unclear photos of a scene. What if the computer could process the image, deal with the little details, extract the information and do away with manual data entry? Since Optical Character Recognition exists, we can extract a good deal of text from images without straining our eyes trying to read off a string. But what if we apply it to cars?

From AI to Real Hardware

    The whole process of capturing a license plate using AI consists of several image processing phases (a condensed code sketch follows the list): 

  • Preprocess: preprocessing the captured image with different methods like: Gaussian Blur, Rotations, Enhancements, Projections, etc.
  • Segmentation: finding the region in the picture where the license plate is.
  • Recognition: using the model to detect the letters and numbers on the plate across different fonts. Syntactic rules for plate formats can optionally be applied as well.
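
A condensed sketch of the three phases using OpenCV and Tesseract (the contour heuristic and file name are placeholders; production ANPR uses trained plate detectors instead):

    # Condensed ANPR sketch: preprocess -> segment -> recognize.
    # File name and thresholds are placeholders; real systems use
    # trained detectors rather than this simple contour heuristic.
    import cv2
    import pytesseract

    img  = cv2.imread("car.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Preprocess: blur to suppress noise, then find edges
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges   = cv2.Canny(blurred, 50, 200)

    # Segmentation: keep the largest contour with a plate-like aspect ratio
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    plate = None
    for c in sorted(contours, key=cv2.contourArea, reverse=True):
        x, y, w, h = cv2.boundingRect(c)
        if h > 0 and 2.0 < w / h < 6.0:   # typical plate aspect ratio
            plate = gray[y:y + h, x:x + w]
            break

    # Recognition: OCR on the cropped plate region
    if plate is not None:
        text = pytesseract.image_to_string(plate, config="--psm 7")
        print(text.strip())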

     This whole process of applying OCR (Optical Character Recognition) to car license plates, together with the preprocessing and segmentation phases, is popularly called Automatic Number-Plate Recognition (ANPR) or simply License-Plate Recognition (LPR). It is mostly embedded in surveillance cameras placed at fixed points or mounted on other vehicles to monitor traffic or track cars wherever needed.

Motorola's L5F fixed LPR Camera (Left)
LPR Camera Mounted on a Police Car (Right)

We have the AI, we have the Hardware, now what?

    These systems capable of detecting plate numbers have a good number of real-world applications. As people grow more and more dependent on their cars, we can apply license plate detection systems in:

  • Security & Forensics: law enforcement uses this technology on its cameras to track stolen vehicles and wanted individuals, and to monitor the traffic ahead of the unit (speed cameras).
  • Vehicle Access Control: automated gate opening using license-plate recognition; plates are associated with the vehicles of people in an organization, letting them into a given space while keeping outsiders out.
  • Ticketless Parking: cameras scan vehicles' license plates to trigger various actions automatically in a parking lot. Vehicle access control comes into play, and we can also automate tasks such as sending an alert when a vehicle leaves the lot or paying digitally without issuing a ticket.
    The benefits this automation brings are numerous. For example, with ticketless parking we can substantially reduce waiting times, because we no longer issue parking tickets or wait for an attendant to raise the barrier. And if the computer can do all of this automatically, no staff are needed to open barriers and read tickets, which makes the system more cost-effective. The benefits of License-Plate Recognition are thus cost and time efficiency. Moreover, automated systems like this leave room for improvements (better security, new features such as smartphone alerts, etc.).
 
 

References:

  • Survision, License Plate Recognition, URL: https://survisiongroup.com/post-what-is-license-plate-recognition, Last Accessed: 8.11.2022
  • J. A. G. Nijhuis et al., "Car license plate recognition with neural networks and fuzzy logic," Proceedings of ICNN'95 - International Conference on Neural Networks, 1995, pp. 2232-2236 vol.5, doi: 10.1109/ICNN.1995.487708.
  • Motorola Fixed License-Plate Recognition Camera Website, URL: https://www.motorolasolutions.com/en_us/video-security-access-control/license-plate-recognition-camera-systems/l5f-fixed-lpr-camera-story.html, Last Accessed: 9.11.2022
  • Fu, Y. (2019). Automatic License Plate Recognition Using Neural Network and Signal Processing. UC Riverside. ProQuest ID: Fu_ucr_0032N_13706. Merritt ID: ark:/13030/m5cv9gq1. Retrieved from https://escholarship.org/uc/item/57q846r5

Deep Learning Super Sampling


Video game graphics are constantly evolving, becoming more and more computationally intensive. Complex particle physics, real-time lighting using ray tracing, and photo-realistic textures and materials all contribute to the improvement of graphics. Although these improvements are impressive, the processing power they demand from graphics processing units (GPUs) is becoming so high that hardware limitations are becoming an issue. GPU scaling has plateaued as transistor sizes approach the atomic scale, meaning that further increases in power come with steep increases in chip size and power consumption.


Image quality has improved by solving issues such as aliasing. Jagged, pixelated edges on what should be smooth curves and lines can drastically reduce image quality. Traditionally, this was solved by rendering the image at a higher resolution than the one being displayed and then scaling it back down, using the extra pixels for color calculations. This is an extremely intensive process that drastically decreases the performance of video games. To overcome this issue, new methods of improving performance have been devised using technologies such as deep learning. 
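
As a tiny illustration of the traditional approach, here is just the downscaling step: render at twice the resolution, then average each 2x2 block into one display pixel:

    # Tiny supersampling sketch: average each 2x2 block of a frame
    # rendered at double resolution down to one display pixel.
    import numpy as np

    def downsample_2x(hi_res):
        """hi_res: (2H, 2W, 3) image -> (H, W, 3) anti-aliased image."""
        h, w, c = hi_res.shape
        blocks = hi_res.reshape(h // 2, 2, w // 2, 2, c)
        return blocks.mean(axis=(1, 3))

    frame = np.random.rand(2160, 3840, 3)   # "rendered" at 4K
    print(downsample_2x(frame).shape)       # (1080, 1920, 3) for display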


DLSS (Deep Learning Super Sampling) is an NVIDIA AI-assisted rendering feature which, with the help of dedicated AI processors (Tensor Cores) on NVIDIA GPUs, renders fewer pixels and uses AI to construct sharp, higher-resolution images.



Trained by a NVIDIA supercomputer, the AI network receives two primary inputs:

  1. Low-resolution, aliased images rendered by the game engine

  2. Low-resolution motion vectors from the same images -- also generated by the game engine


The AI network in question is a special type called a convolutional autoencoder, which uses the motion vectors from the previous frame to predict what the next frame will look like, a process called "temporal feedback". In other words, the AI takes the low-resolution current frame and the high-resolution previous frame to determine how to generate a higher-quality current frame.
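
NVIDIA's network itself is proprietary, but a heavily simplified sketch of a convolutional autoencoder that consumes the upscaled current frame together with the motion-warped previous frame might look like this (layer sizes are invented for illustration):

    # Loose, simplified convolutional autoencoder sketch for temporal
    # upscaling; NOT NVIDIA's actual DLSS network. Layer sizes invented.
    import torch
    import torch.nn as nn

    class TemporalUpscaler(nn.Module):
        def __init__(self):
            super().__init__()
            # input: upsampled current low-res frame (3ch) + motion-warped
            # previous high-res frame (3ch), concatenated channel-wise
            self.encoder = nn.Sequential(
                nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
            )

        def forward(self, current_lr_up, prev_hr_warped):
            x = torch.cat([current_lr_up, prev_hr_warped], dim=1)
            return self.decoder(self.encoder(x))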


To train such a network, NVIDIA takes the output image and compares it to an offline-rendered, ultra-quality 16K reference image; the difference is fed back into the network so that it can improve. This process is repeated tens of thousands of times. It requires the special processing units found on NVIDIA GPUs, which offer 110 teraflops of dedicated AI horsepower. 


With the help of DLSS, the video game industry has managed to considerably increase image quality while no longer worrying about the huge performance losses of traditional supersampling algorithms. Further developments in supersampling can be expected from other manufacturers such as AMD, although they remain far behind what NVIDIA has achieved.


References: 

[1] Watson, Alexander. "Deep learning techniques for super-resolution in video games." arXiv preprint arXiv:2012.09810 (2020).


[2] https://www.nvidia.com/en-us/data-center/tensor-cores/


[3] https://en.wikipedia.org/wiki/Supersampling

[4] https://www.nvidia.com/en-us/geforce/news/nvidia-dlss-2-0-a-big-leap-in-ai-rendering/

Tuesday, 8 November 2022

Dynamic pricing using Machine Learning

 In today's quickly developing digital economy, businesses reap the benefits of vast amounts of data by using dynamic pricing to change prices in real time. Dynamic pricing is the technique of determining a product's or service's price based on the state of the market.

Dynamic pricing can be used in various price setting methods:

• Cost-based pricing, which maintains profit margins at a predetermined level while constantly adjusting prices in accordance with business costs.

• Competitor-based pricing, which considers pricing choices made by competitors.

• Demand-based pricing, which makes the prices rise as consumer demand rises and supply declines, and vice versa.

Setting the right price for a good or service is an old problem in economic theory. Many pricing tactics exist, and they vary depending on the goal being pursued. One company may seek to maximize profitability on each unit sold or on the overall market share, while another company needs to access a new market or to protect an existing one. Additionally, many situations can coexist in the same business, for various commodities or customer segments.

Some of the most important questions that retailers frequently have are:

• If we want to sell all of our stock in less than a week, what price should we set?

• In light of the current status of the market, the season, the competition and other factors, what is the reasonable pricing for this product?

 

The vast majority of pricing algorithms estimate the demand function using historical sales data. The four key stages of a typical pricing algorithm's workflow are as follows:

• The engine consumes historical information on price points and demand for specific items and processes it with the dynamic pricing algorithm.

• The demand function is built based on discovered dependencies.

• To produce ideal prices, it analyzes hundreds of pricing and non-pricing factors.

• The algorithm repeats the cycle once the suggested prices are applied, accounting for the most recent repricing outcomes.

Most dynamic pricing engines are based on two-stage machine learning. The first stage estimates the precise effect of price changes on sales. The price optimization stage then uses those results to recommend prices for the whole portfolio.
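
A minimal sketch of this two-stage idea, with made-up data: stage one fits a constant-elasticity demand curve (log demand against log price), and stage two searches a price grid for the profit-maximizing price:

    # Two-stage sketch with made-up data. Stage 1 fits a
    # constant-elasticity demand curve; stage 2 searches a price grid.
    import numpy as np

    prices  = np.array([ 9.0, 10.0, 11.0, 12.0, 13.0])  # historical prices
    demands = np.array([510., 430., 370., 320., 280.])  # units sold

    # Stage 1: log(demand) = a + b*log(price); b is the price elasticity
    b, a = np.polyfit(np.log(prices), np.log(demands), 1)

    def expected_demand(p):
        return np.exp(a) * p ** b

    # Stage 2: choose the price that maximizes expected profit
    unit_cost = 6.0
    grid = np.linspace(8.0, 18.0, 201)
    profit = (grid - unit_cost) * expected_demand(grid)
    best = grid[profit.argmax()]
    print(f"elasticity ~ {b:.2f}, profit-maximizing price ~ {best:.2f}")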

Because of the complexity of dynamic pricing, different modules are used depending on the demand. 

 Fig. 1: Module usage. Source: McKinsey & Company

 

Long tail module

This module is for new products or products with little or no historical data. Its main challenge is to use product attributes to match products that have little purchase data with similar products that have rich purchase data, so prices can be informed by the richer data. 

Elasticity module

The elasticity module accounts for seasonality when calculating the effect of price on demand.

Key Value Items (KVI) module

Key value items are popular items whose prices consumers tend to remember more than other items (for a grocery store, this would be eggs, milk and bread). By making sure that goods that have a significant impact on a customer's sense of pricing are priced appropriately, the KVI module seeks to manage consumer price perception. For grocery companies, this is crucial.

Competitive-response module

This module uses detailed pricing information from competitors and the effect those prices have on the company's customers to respond in real-time.

Omnichannel module

Companies set different rates for different channels in order to both price discriminate and entice customers to use less expensive channels. Omnichannel modules ensure that prices in different channels are coordinated.

 

Considering all the benefits it provides to businesses, dynamic pricing is likely to displace fixed prices in more and more markets in the near future. The dynamics of the strategy may shift to put more emphasis on understanding customers and the impact price has on their decisions. 

 

References:

    [1] Bright. (2022, June 23). How Machine Learning Is Helping In Providing Dynamic Pricing. Medium. https://medium.com/total-data-science/how-machine-learning-is-helping-in-providing-dynamic-pricing-7efdb8af9083

    [2] Dynamic pricing. (n.d.). Big Data Analysis. http://ibigdata.in/works/dynamic-pricing/
