Sunday Special: #Wrights #Law #Trumps #Moores #Law

Disclaimer: All views are personal and do not reflect any position of organisations that I am associated with professionally or voluntarily

The technology and software worlds have long been fascinated with Moore’s Law. As more and more engineers and graduates have made software a way of life, Moore’s Law has become a commonplace concept.

For the uninitiated, here is Moore’s Law in brief –

Moore’s Law states that the number of transistors on a microchip doubles about every two years, though the cost of computers is halved. In 1965, Gordon E. Moore, the co-founder of Intel, made this observation that became Moore’s Law. Another tenet of Moore’s Law says that the growth of microprocessors is exponential. (Source here)
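To make the arithmetic concrete, here is a minimal sketch of Moore’s Law as a compounding process. The 2-year doubling period and the starting transistor count (the Intel 4004’s roughly 2,300 transistors) are illustrative reference points only, not a claim about any particular product line.

```python
# A minimal sketch of Moore's Law as a compounding process.
# Assumptions (illustrative only): a 2-year doubling period and the
# Intel 4004's transistor count as a starting point.

def moores_law_count(initial_count: float, years: float, doubling_years: float = 2.0) -> float:
    """Projected transistor count after `years`, doubling every `doubling_years`."""
    return initial_count * 2 ** (years / doubling_years)

if __name__ == "__main__":
    start = 2_300  # transistors on the Intel 4004 (1971), used as a reference point
    for years in (2, 10, 20, 50):
        print(f"After {years:>2} years: ~{moores_law_count(start, years):,.0f} transistors")
```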

As Intel grew in prominence, Moore’s Law also gained significant popularity and acceptance. Often, though, that acceptance amounted to overfitting the concept onto the observation.

The fundamental construct of Moore’s Law is the time factor. As technology advances and disruptions continue, that time factor needs revision; the currently accepted time factor is 18 months.

A contemporary, equally compelling and thought-provoking concept is Wright’s Law.

What is Wright’s Law?

While studying airplane manufacturing, Theodore Paul Wright postulated that ‘for every doubling of airplane production the labor requirement was reduced by 10-15%.’ (Source here).

The fundamental premise of Wright’s law is that ‘we learn by doing’.
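In experience-curve terms, Wright’s observation can be written as a power law in cumulative output: every doubling of units produced cuts unit cost (or labour) by a fixed fraction. Below is a minimal sketch, assuming an illustrative 15% reduction per doubling, the upper end of Wright’s 10-15% range.

```python
import math

# A minimal sketch of Wright's Law (the experience curve).
# Assumption (illustrative): a 15% labour/cost reduction per doubling of
# cumulative production, i.e. cost(2n) = 0.85 * cost(n).

def wrights_law_cost(first_unit_cost: float, units_produced: float,
                     reduction_per_doubling: float = 0.15) -> float:
    """Unit cost after a cumulative `units_produced`, following a power law."""
    b = math.log2(1.0 - reduction_per_doubling)  # learning exponent (negative)
    return first_unit_cost * units_produced ** b

if __name__ == "__main__":
    for n in (1, 2, 4, 100, 1_000):
        print(f"Unit #{n:>5}: cost = {wrights_law_cost(100.0, n):6.1f} (first unit = 100.0)")
```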

The idea makes a lot of sense and has been the foundational premise of many thriving businesses, where a productivity target is kept in sight every year. The cost of simply getting the job done keeps declining, which is why there is a constant call for automation and for doing more with less.

The fundamental difference between the two laws is that –

‘Moore’s Law focuses on cost of production as a function of time while Wright’s Law focuses on cost of production as a function of the number of units produced.’

IEEE published a detailed study comparing the following –

  • Moore’s Law,
  • Wright’s Law,
  • Goddard’s Law (economies of scale),
  • Nordhaus Synthesis (Time and experience) and
  • Sinclair, Klepper and Cohen’s Synthesis (Experience and Scale).

The study also found that both Moore’s Law and Wright’s Law hold up when production grows exponentially, and that in this regime Wright’s Law gives better predictions than Moore’s Law.
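One way to see why the two laws converge in that regime: if cumulative production itself grows exponentially in time, substituting it into Wright’s power law yields an exponential decline in cost over time, which is exactly the form Moore’s Law takes. The short sketch below demonstrates this numerically; all parameters are chosen purely for illustration.

```python
import math

# Sketch: when cumulative production n(t) grows exponentially, Wright's Law
# (cost ~ n^b) collapses into a Moore-style exponential decline in time.
# All parameters below are illustrative assumptions, not measured values.

GROWTH_RATE = 0.5                      # cumulative output grows ~65% per year (e^0.5)
REDUCTION_PER_DOUBLING = 0.15          # illustrative Wright's learning rate
B = math.log2(1.0 - REDUCTION_PER_DOUBLING)

def cumulative_units(t_years: float) -> float:
    return math.exp(GROWTH_RATE * t_years)              # n(t) = e^(g*t)

def cost_wright(t_years: float) -> float:
    return 100.0 * cumulative_units(t_years) ** B       # Wright: cost = c0 * n^b

def cost_moore_style(t_years: float) -> float:
    return 100.0 * math.exp(GROWTH_RATE * B * t_years)  # same curve, written in time

if __name__ == "__main__":
    for t in (0, 2, 5, 10):
        print(f"t={t:>2}y  Wright: {cost_wright(t):6.2f}  time-based: {cost_moore_style(t):6.2f}")
```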

With the growing production of AI models and analytics products and solutions, the world of artificial intelligence and data engineering is approaching exponential growth. While Moore’s Law can no longer explain this growth in purely temporal terms, applying Wright’s Law suggests that the cost of producing these models should fall as cumulative output grows.

The plethora of online courses for AI and ML is helping build general awareness of the concepts. A broader and more abundant supply of talent should bring down the cost of talent acquisition (and retention), and with it the cost of production.

This should be Wright’s Law at play in this sector.

Sunday Special: #Deepfake #AI #BakaMitai #DameDaNe #NaMo #RaGa

Disclaimer: This is a personal piece with no intention to hurt anyone. It’s a small experiment that was conducted within a few hours. All information has been picked up from the public domain. All distortions are for testing and experimental purposes only. The intention behind this post is to demonstrate how easy it is to create a deepfake video.

One of the biggest problems the world faces with the advance of Artificial Intelligence is that of deepfakes, that is, the proliferation of synthetic media.

Events and things that never happened are being artificially created.

Very simply put, a deepfake is the output of an algorithm that can manipulate images and videos, embedding speech and facial movement into a target video or image with great precision. When skilfully done, it is difficult to tell the real video from the fake: the manipulation is baked into the footage itself rather than layered on top. The subject, of course, is usually completely unaware that such content even exists.
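For the technically curious, the classic face-swap style of deepfake (not necessarily the method behind the memes discussed below, which are typically built on motion-transfer models) trains a single shared encoder together with one decoder per identity; swapping decoders at inference time re-renders one person’s face with the other’s appearance. The PyTorch sketch below is purely illustrative, with arbitrary layer sizes and placeholder data.

```python
import torch
import torch.nn as nn

# Illustrative sketch of the classic face-swap deepfake architecture:
# one shared encoder learns a common latent space for two identities,
# and each identity gets its own decoder. Swapping decoders at inference
# re-renders a face of A through B's decoder (or vice versa).
# All layer sizes and data below are arbitrary placeholders.

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),                           # shared latent code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(256, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),     # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),   # 32 -> 64
        )

    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per identity

# Training objective (sketch): reconstruct each identity through the shared encoder.
faces_a = torch.rand(8, 3, 64, 64)            # placeholder batches of face crops
faces_b = torch.rand(8, 3, 64, 64)
loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a) \
     + nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b)

# The "swap": encode a face of A, decode it with B's decoder.
swapped = decoder_b(encoder(faces_a))
print(loss.item(), swapped.shape)             # torch.Size([8, 3, 64, 64])
```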

This is alarming and dangerous. Until recently, morphing and content distortion were largely confined to the pornography industry, where, very disturbingly, videos were modified and used against women.

However, manufacturing these fake videos required a certain level of manual precision and labour, which acted as a huge barrier.

Unfortunately, with the advent of AI, this barrier is breaking.

If you haven’t heard about Baka Mitai or Dame Da Ne, it’s all right. A simple Google search will show you the rage that these memes and similar synthetic content have become on YouTube and the internet in general.

The proliferation of this content made me wonder how easy or difficult it would be to build such content myself. With a little research, I came across this video, where the host gives very precise instructions on making fake videos, with the intention of exposing how simple the process is.

Tempted by the ease with which it can be done, I tried making two videos myself. I was able to produce them in no time.

The first attempt was with #RaGa. Here is the output.

Just to keep things in the right perspective, I then prepared another one featuring the political opposition to the gentleman in the first video.

Total effort spent on this activity was less than 3 hours.

The ease with which I could create these memes underscores the risk of deepfake exposure that we face in our everyday lives. This took just three hours of effort. With more time and some coding, one could make close-to-real memes and videos and publish them.

Given political and other motivations, such videos and capabilities pose a serious threat to our ability to know the truth. Human intelligence alone is not enough to tell the real from the fake.

In a subsequent blog I shall spend more time on the technical aspects of this process, perhaps with a deeper dive into the inner workings of the algorithms.

Irrespective of that, this remains something that is deeply bothersome.