BioNotes.Org

ML, vim, biology, math, and more

Wednesday 01 March 2023

  • Elad Gil #1: Startups v Incumbents: who will capture value? published 10/25/2022
  • Elad Gil #2: Differences between LLMs and image gen, and between platforms/markets/open-source, published 2/15/2023
  • Elad Gil #3: Transformers and LLMs published 8/30/2022
  • a16z on who owns the generative AI platform published 1/19/2023
    • We’re starting to see the very early stages of a tech stack emerge in generative artificial intelligence (AI). Hundreds of new startups are rushing into the market to develop foundation models, build AI-native apps, and stand up infrastructure/tooling.
    • In other words, the companies creating the most value — i.e. training generative AI models and applying them in new apps — haven’t captured most of it. Predicting what will happen next is much harder. But we think the key thing to understand is which parts of the stack are truly differentiated and defensible. This will have a major impact on market structure (i.e. horizontal vs. vertical company development) and the drivers of long-term value (e.g. margins and retention). So far, we’ve had a hard time finding structural defensibility anywhere in the stack, outside of traditional moats for incumbents.
    • The stack can be divided into three layers:
      • Applications that integrate generative AI models into a user-facing product, either running their own model pipelines (“end-to-end apps”) or relying on a third-party API
      • Models that power AI products, made available either as proprietary APIs or as open-source checkpoints (which, in turn, require a hosting solution)
      • Infrastructure vendors (i.e. cloud platforms and hardware manufacturers) that run training and inference workloads for generative AI models

Monday 13 March 2023

  • Simon Willison’s timeline of LLaMA developments as of 3/13/2023

Tuesday 14 March 2023

16 March 2023

20 March 2023

21 March 2023

  • Adobe launches Firefly
  • Nvidia GTC Conference announcements
  • Google BARD announcements
    • Verge on Google says Bard is not a search engine
    • YouTube video reviewing Bard v. Bing aka LaMDA v. GPT-4 from this HN submission
    • BBC article on the Bard launch
    • Main HN thread
    • NYT main article
    • NYT evaluation of Bard compared to other LLMs
    • Interesting observation: there is a “Google It” CTA at the bottom of each Bard answer. Is it a convenience, an acknowledgement that Bard is not complete, or an attempt to still maintain search revenue over time?
    • Note: Bard is the overall chat product, but it is powered on the back end by the LLM / foundation model LaMDA
    • LaMDA was originally announced as Meena in 2020. First generation LaMDA (aka Language Model for Dialogue Applications) was then introduced in October 2021.
    • More history of LaMDA, formerly named Meena
      • Developed by Google Brain over many years, Meena was introduced in January 2020 with 2.6B parameters.
      • Two of the lead developers, Daniel De Freitas and Noam Shazeer (2nd author on the “Attention Is All You Need” paper, Dec 2017), left Google in frustration because senior Google execs would not let Meena/LaMDA be released to the public. They left and cofounded Character.ai. See additional info in this HN comment
      • First generation of LaMDA announced during Google I/O May 2021.
      • Second generation of LaMDA announced during Google I/O May 2022.
      • Summer of 2022 – Google engineer Blake Lemoine claimed that LaMDA was sentient, giving an interview with WIRED and hiring a lawyer for LaMDA, claiming alien-intelligence rights under the 13th Amendment of the US Constitution. The Wiki article has a summary of objections by Gary Marcus, Yann LeCun, etc. I would claim this is a human hallucinating, rather than an LLM hallucinating.
      • In November 2022, OpenAI launched ChatGPT. The capability, ease-of-use, and popularity of ChatGPT led to a code red within Google HQ.
      • In early February 2023, Google held a series of announcements and demos of Bard, which is based on LaMDA, and these did not go well, leading to a ~$100B loss of market cap within hours.
      • This week, on Tuesday March 21, 2023, Google released Bard to a small number of users in the US and UK, with a waiting list for more people over time.
    • The NYT reported 3/21 that “more than 20 A.I. products and features” will be launched, including “a feature called Shopping Try-on and the ability to create custom background images for YouTube videos and Pixel phones.”
  • Adobe launches Sensei Generative AI Services to complement Adobe Firefly
  • MS brings OpenAI’s DALL-E to Bing
  • Google opens access to Bard
  • Nvidia announces new cloud services as part of GTC, the GPU Tech Conference

22 March 2023

23 March 2023

  • From Daring Fireball link, John H. Meyer aka @beastmode has made very convincing deepfake voice versions of Steve Jobs
  • From Rodney Brooks, new essay What Will Transformers Transform?
  • Consolidated timeline of ChatGPT from Techcrunch 3/23 article
  • Klarna plugin for ChatGPT
  • Main TC article on ChatGPT plugins (a search-and-cite sketch of the browsing-plugin pattern follows this list)
    • “Easily the most intriguing plugin is OpenAI’s first-party web-browsing plugin, which allows ChatGPT to draw data from around the web to answer the various questions posed to it. (Previously, ChatGPT’s knowledge was limited to dates, events and people prior to around September 2021.) The plugin retrieves content from the web using the Bing search API and shows any websites it visited in crafting an answer, citing its sources in ChatGPT’s responses.”
    • List of early collaborators: “Expedia, FiscalNote, Instacart, Kayak, Klarna, Milo, OpenTable, Shopify, Slack, Speak, Wolfram and Zapier”
    • “Plugins are a curious addition to the timeline of ChatGPT’s development. Once limited to the information within its training data, ChatGPT is, with plugins, suddenly far more capable — and perhaps at less legal risk. Some experts accuse OpenAI of profiting from the unlicensed work on which ChatGPT was trained; ChatGPT’s dataset contains a wide variety of public websites. But plugins potentially address that issue by allowing companies to retain full control over their data.”
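  • A minimal sketch of the search-then-cite pattern described in the browsing-plugin quote above. This is my illustration, not OpenAI’s plugin code: `bing_search` wraps the real Bing Web Search API, while the LLM step is left abstract since the real plugin runs inside ChatGPT.

```python
# Illustrative browsing-plugin-style loop: search the web, collect sources,
# and hand them to the model so the answer can cite them.
import requests

BING_ENDPOINT = "https://api.bing.microsoft.com/v7.0/search"

def bing_search(query: str, api_key: str, count: int = 3) -> list[dict]:
    """Return top web results (title, url, snippet) from the Bing Web Search API."""
    resp = requests.get(
        BING_ENDPOINT,
        headers={"Ocp-Apim-Subscription-Key": api_key},
        params={"q": query, "count": count},
        timeout=10,
    )
    resp.raise_for_status()
    return [
        {"title": r["name"], "url": r["url"], "snippet": r["snippet"]}
        for r in resp.json().get("webPages", {}).get("value", [])
    ]

def build_cited_prompt(question: str, api_key: str) -> str:
    """Gather sources and build a prompt asking the model to cite them by number."""
    sources = bing_search(question, api_key)
    context = "\n".join(
        f"[{i + 1}] {s['title']} ({s['url']}): {s['snippet']}"
        for i, s in enumerate(sources)
    )
    # In the real plugin, ChatGPT consumes something like this and returns a cited answer.
    return (
        "Answer the question using only the sources below, citing them by number.\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
```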

25 March 2023

  • HN thread about Ben Thompson’s interview with Jensen Huang. Quotes:
    • “I first spoke with Nvidia founder and CEO Jensen Huang after last March’s GTC conference, and again after last fall’s GTC; as I observe in the interview below, Nvidia’s semiannual conference frequency might seem very aggressive, but given Nvidia’s central role in AI our last talk seems like it was years ago.
    • “In this interview, conducted on the occasion of this week’s GTC, we discuss what Huang calls AI’s iPhone moment — ChatGPT — and how that has affected Nvidia’s business. We also touch on the biggest announcement from GTC — Nvidia’s new DGX Cloud service — while also discussing how Nvidia responded to the Biden administration’s export controls, TSMC’s new plant in Arizona, running AI locally, and Nvidia’s position in the stack in an LLM world.
    • List of topics:
      • The Impact of ChatGPT
      • Nvidia’s ChatGPT Response
      • China and TSMC
      • DGX Cloud
      • The DGX Cloud Customer
      • CUDA and Commoditization
      • Centralized vs. Localized Compute

26 March 2023

  • Synopsys 2021 history and timeline – coining the term “SysMoore”: “SysMoore era designs converge multiple technologies in one sophisticated package, requiring a holistic analysis of the entire system. Previous methods that analyze each part of the system independently simply will not work in the SysMoore era. What is required is a hyper-convergent design flow that integrates best-in-class technology to deliver a unified analysis of the entire system.”
  • Really nice Dec 2022 history from The Computer History Museum complete with diagrams, pictures, etc.
  • History of FinFET gating strategies and bio of NTU/Berkeley prof Chenming Hu at IEEE Spectrum April 2020. Good article
  • IEEE Spectrum ultimate transistor timeline diagram of invention+commercialization of all innovations from 1950 - 2022.
  • Long IEEE Spectrum article from Dec 2022 on Transistors: the device that changed everything

Papers on ASICS, FPGAs, GPUs, and DL-optimized chips like TPUs

  • 2018 paper on FPGAs optimized to train CNNs for classification tasks. Good comparison of FPGAs vs. ASICs specifically in the area of deep learning (although this paper is 5 years old)
  • Listing of AI chips
  • Very basic intro
  • Wiki article on the history of, and current debate over, the proper term for TPUs (“AI accelerator”?)
  • Wiki on TPUs is pretty good; should read more about it.
  • Wiki on DLPs (Deep Learning Processors)
  • See also Jensen Huang, founder of NVIDIA, on Huang’s Law

28 March 2023

  • deepfake images from Seymour https://www.theneurondaily.com/p/pope-trump-elon-go-viral
  • From an HN thread: Steve Yegge, of the famous Google/Amazon/Grab rants, wrote an LLM rant for Sourcegraph

30 March 2023

01 April 2023

02 April 2023

03 April 2023

  • Jeremy Howard’s Twitter thread about how mmap makes llama.cpp’s memory numbers misleading, i.e. it only looks like very little memory is required. Related HN thread. See the mmap sketch after this list.
  • “Blinded by the speed of change,” Ron Miller in TC 3/26/2023 – on the rapid growth of generative AI
  • Deep Agency, from Danny Postma of Headlime (an AI-powered marketing copy startup recently acquired by Jasper). Techcrunch 3/27
  • Asana launches new work intelligence tools
  • MS launches Copilot for cybersecurity
  • TC article about Open Letter asking for 6 month pause
  • From Balenciaga Pope to the Great Cascadia Earthquake, new deepfakes including info about Scorsese’s made up 1973 film “Goncharov”.
  • Ads added to Microsoft Bing chat
  • AI Based cybersecurity DataDome raises $42M Series C
  • Fixie wants to make it easier to build on top of LLMs
  • Oscilar, self-funded with $20M and founded by a former senior engineering director at FB and a co-creator of Confluent who was an eng leader at LinkedIn, emerges from stealth to improve fraud detection with a new, internal, data-pipeline-optimized AI approach
  • 91 startups aka 34% of current YC class are involved in AI in some way; 54 startups aka 20% are purely in generative AI. Techcrunch Plus article from 3/30
  • Meeting Intelligence tool Read summarizes meetings into 2 minute clips
  • Timnit Gebru, Emily Bender, Angelina McMillan-Major, and Margaret Mitchell sign a counter-letter responding to the 6-month-pause open letter, saying it is overly focused on ‘longtermism’ philosophy. TC article from 3/31
  • Ron Miller at TC writes that “Generative AI’s future in enterprise could be smaller, more focused language models”
  • Generative AI startup Narrato helps with marketing.
    • “Narrato’s main feature is a AI content assistant that helps with planning, including automatic brief generation, content creation and optimization. It also has collaboration and workflow tools and automated publishing features. Solanki explained that for both AI and non-AI content creation, users chose from templates, including blogs, articles, web copy, emails, video scripts, social media content and art. Narrato also has a chat-like format for content creation through AI, and plans to expand its selection of generative AI-assisted content templates to hundreds.”
    • “Solanki named several startups as Narrato’s indirect and direct competitors. Notion, Clickup and Airtable are used by content creators for content project management, while Jasper and Copy.ai are content creation platforms that also use AI. How Narrato wants to differentiate is by embedding generative AI into the entire marketing and content creation workflow in a single platform.”
  • Darrel Etherington “A knife so sharp you don’t feel the cut” in TC 4/03
  • Ethan Mollick on a fictional HBS case generated by GPT-4, sent to me by jtlin
  • Samuel R. Bowman at NYU has a blog post and PDF about “Eight Things to Know about LLMs”. Might be a longtermism person. HN item here
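  • A minimal sketch of the mmap point from the Jeremy Howard item above, assuming Linux and the standard library only; the weights path is a placeholder and this is my illustration of the general mechanism, not llama.cpp’s code. Mapping a large file barely changes resident memory until the pages are actually read, which is why mmap-based loaders can appear to “need” far less RAM than the model size.

```python
# Illustration: mmap reserves address space, but pages only become resident
# (and count toward RSS) when they are touched.
# "model-weights.bin" is a placeholder, not a real llama.cpp artifact.
import mmap
import resource

def max_rss_mb() -> float:
    # ru_maxrss is reported in KB on Linux (bytes on macOS); Linux assumed here.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024

path = "model-weights.bin"  # pretend this is a multi-GB weights file

with open(path, "rb") as f:
    print(f"before mmap:  {max_rss_mb():.1f} MB resident")
    mm = mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ)
    print(f"after mmap:   {max_rss_mb():.1f} MB resident (nothing paged in yet)")

    # Touch one byte per page: now the kernel actually loads the data and RSS
    # grows toward the file size (until memory pressure evicts clean pages).
    checksum = sum(mm[i] for i in range(0, len(mm), mmap.PAGESIZE))
    print(f"after touch:  {max_rss_mb():.1f} MB resident (checksum={checksum})")
    mm.close()
```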

04 April 2023

Created this page to track notes on product experience as an end-user and consumer

05 April 2023

  • Annie Lowrey interview with Amba Kak of the NY-based AI Now Institute in The Atlantic, 4/03/2023: “AI isn’t omnipotent. It’s janky.”
  • The Atlantic 4/04 by Jacob Stern on how software AI is advancing faster than Robotics. “AI is running circles around robotics”
    • ‘The counterintuitive notion that it’s harder to build artificial bodies than artificial minds is not a new one. In 1988, the computer scientist Hans Moravec observed that computers already excelled at tasks that humans tended to think of as complicated or difficult (math, chess, IQ tests) but were unable to match “the skills of a one-year-old when it comes to perception and mobility.” Six years later, the cognitive psychologist Steven Pinker offered a pithier formulation: “The main lesson of thirty-five years of AI research,” he wrote, “is that the hard problems are easy and the easy problems are hard.” This lesson is now known as “Moravec’s paradox.”’

07 April 2023

  • Nathan Lambert’s 4/05 Substack has an article on the organizational, psychological, social, competitive and social media pressures hitting folks working in AI in both academia and industry right now. Titled “Behind the Curtain: what it feels like to work in AI right now”
  • 4/07 episode of Hard Fork with guest Ezra Klein: “AI Vibe Check”.

10 April 2023

12 April 2023

  • Announcement of $10M seed round for LangChain
  • From this HN thread, discovered phind.com. See prompt testing product page for more.
  • Greg Brockman tweet:
    • ‘The underlying spirit in many debates about the pace of AI progress—that we need to take safety very seriously and proceed with caution—is key to our mission. We spent more than 6 months testing GPT-4 and making it even safer, and built it on years of alignment research that we pursued in anticipation of models like GPT-4.
    • ‘We expect to continue to ramp our safety precautions more proactively than many of our users would like. Our general goal is for each model we ship to be our most aligned one yet, and it’s been true so far from GPT-3 (initially deployed without any special alignment), GPT-3.5 (aligned enough to be deployed in ChatGPT), and now GPT-4 (performs much better on all of our safety metrics than GPT-3.5).
    • ‘We believe (and have been saying in policy discussions with governments) that powerful training runs should be reported to governments, be accompanied by increasingly-sophisticated predictions of their capability and impact, and require best practices such as dangerous capability testing. We think governance of large-scale compute usage, safety standards, and regulation of/lesson-sharing from deployment are good ideas, but the details really matter and should adapt over time as the technology evolves. It’s also important to address the whole spectrum of risks from present-day issues (e.g. preventing misuse or self-harm, mitigating bias) to longer-term existential ones.
    • ‘Perhaps the most common theme from the long history of AI has been incorrect confident predictions from experts. One way to avoid unspotted prediction errors is for the technology in its current state to have early and frequent contact with reality as it is iteratively developed, tested, deployed, and all the while improved. And there are creative ideas people don’t often discuss which can improve the safety landscape in surprising ways — for example, it’s easy to create a continuum of incrementally-better AIs (such as by deploying subsequent checkpoints of a given training run), which presents a safety opportunity very unlike our historical approach of infrequent major model upgrades.
    • ‘The upcoming transformative technological change of AI is something that is simultaneously cause for optimism and concern — the whole range of emotions is justified and is shared by people within OpenAI, too. It’s a special opportunity and obligation for us all to be alive at this time, to have a chance to design the future together.’

15 April 2023

16 April 2023

19 April 2023

20 April 2023

  • Gen AI and finance
  • BloombergGPT. Paper, press release, news article, HN thread which has discussion about shortcomings in math and in the paper.
    • Follow up item indicating that BloombergGPT will be incorporated into Bloomberg Terminals.
  • Google Brain and Deep Mind to merge.
  • See 4/20/2023 screenshot in Apple Photos for list of gen AI tools by category
  • Check out vector databases more. CozoDB is a hybrid Relational-Graph-Vector database. HN thread. A brute-force vector-search sketch follows this list.
    • Description from Cozo: ‘For those who are unfamiliar with the concept: vector search refers to searching through large collections of usually high-dimensional numeric vectors, with the vectors representing data points in a metric space. Vector search algorithms find vectors that are closest to a given query vector, based on some distance metric. This is useful for tasks like recommendation systems, duplicate detection, and clustering similar data points, and has recently become an extremely hot topic since large language models (LLMs) such as ChatGPT can make use of it to partially overcome their inability to make use of long context.’
    • For those who have not heard of CozoDB before: CozoDB is a general-purpose, transactional, relational database that uses Datalog for query, is embeddable but can also handle huge amounts of data and concurrency, and focuses on graph data and algorithms. It even supports time travel (timestamped assertions and retractions of facts that can be used for point-in-time query)! Follow the Tutorial if you want to learn CozoDB. The source code for CozoDB is on GitHub.
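  • A minimal numpy sketch of what “vector search” means in the Cozo description above: brute-force k-nearest neighbors under cosine similarity over placeholder random embeddings. Real vector databases replace this exhaustive scan with approximate indexes (HNSW, IVF, etc.) to scale; the dimensions and data here are my assumptions for illustration.

```python
# Brute-force k-nearest-neighbor search over a small collection of embeddings,
# using cosine similarity as the distance metric.
import numpy as np

rng = np.random.default_rng(0)
dim, n = 384, 10_000                     # roughly sentence-embedding sized vectors
vectors = rng.normal(size=(n, dim))      # placeholder embeddings
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)  # unit-normalize once

def knn(query: np.ndarray, k: int = 5) -> list[tuple[int, float]]:
    """Return (index, cosine similarity) for the k stored vectors closest to query."""
    q = query / np.linalg.norm(query)
    sims = vectors @ q                   # cosine similarity == dot product of unit vectors
    top = np.argsort(-sims)[:k]
    return [(int(i), float(sims[i])) for i in top]

# Example: nearest neighbors of an arbitrary query embedding.
print(knn(rng.normal(size=dim)))
```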

21 April 2023

22 April 2023

23 April 2023

26 April 2023

01 May 2023

02 May 2023

  • Benchmark testing on LLM performance hosted at Lightning AI. tldr – ‘GPT-3 and GPT-4 were a clear cut above the rest, but are a little harder to access given you need to pay for them and you’ll be sharing your data with OpenAI. Flan-t5 (11b) and Lit-LLaMA (7b) answered all of our questions accurately and they’re publicly available. They’ll hold up in an interrogation even though they don’t really have a sense of humor.’
  • Very short Yahoo News piece, with HN thread here, about IBM’s CEO letting 7,800 jobs end by attrition and be replaced by AI

04 May 2023

05 May 2023

08 May 2023

  • Designer’s critique about ‘Why Chatbots Are Not the Future’ and associated HN thread

09 May 2023

10 May 2023

13 May 2023

15 May 2023

  • ‘VCs love to talk about AI, but they aren’t writing as many checks as you might think’ – CB Insights data shows pronounced slowdown in AI investment by Ron Miller at TC
  • Blog post by Ke Fang, a Chinese ML developer, about the big differences between GPT-4 and ChatGPT and how GPT-4 gives individuals superpowers. E.g., as a Chinese speaker, he sees the difference based on the SuperCLUE large Chinese-model benchmark.
    • Quote: “I have never written front-end code before, but after 48 hours of conversation with GPT-4, I built a podcast search website. The website is open-sourced with a GPT-4.0 License to express my gratitude.”
    • quote 2: “A few days later, I wanted to skip certain timestamps when watching videos on web pages. Without any experience in developing Chrome extensions, I followed the instructions by GPT-4 to create files, paste, and drag and drop, and achieved it in less than 15 minutes. I did not put it on the store, and it became a tool serving only me.”
    • HN thread
  • Seems that Anthropic’s Claude now has a 100k context window, which is available in the web UI? A column-extraction prompt sketch follows this list.
    • HN comment:
      • ‘Claude 100k 1.3 blew me away.
      • Giving it a task of extracting a specific column of information, using just the table header column text, from a table inside a PDF, with text extracted using tesseract, no extra layers on top (for those that haven’t tried extracting tables with OCR, it’s a non-trivial problem, and the output is a mess). 40k tokens in context, it performed at extracting the data at 100% accuracy.
      • Changing the prompt to target a different column from the same table worked perfectly as well. Changing a character in the table in the OCR context to test if it was somehow hallucinating, it also accurately extracted the new data.
      • One of those “Jaw to the floor” moments for me.
      • Did the same task in GPT-4 (limiting the context window to just 8k tokens), and it worked, but at ~4x more expensive, and without being able to feed it the whole document.’
  • Interesting HN comment about problems with using ChatGPT to replace writers. Not working as well as planned?
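  • A minimal sketch of the column-extraction task described in the HN comment above: stuff the noisy OCR output plus an instruction into a long-context model and ask for one column back. The `call_claude` function is a placeholder for whatever client or UI you use, and the prompt wording is my assumption, not the commenter’s.

```python
# Sketch: extract one column from messy OCR'd table text with a long-context LLM.
# `call_claude` is a placeholder; wire it to an actual API client or paste by hand.
def call_claude(prompt: str) -> str:
    raise NotImplementedError("connect this to the Anthropic API or the web UI")

def extract_column(ocr_text: str, column_header: str) -> str:
    prompt = (
        "Below is raw OCR output (from tesseract) of a document containing a table.\n"
        "The OCR is noisy: columns may be misaligned and rows broken across lines.\n"
        f"Extract every value from the column whose header is '{column_header}'.\n"
        "Return one value per line, in row order, and nothing else.\n\n"
        f"--- OCR TEXT START ---\n{ocr_text}\n--- OCR TEXT END ---"
    )
    return call_claude(prompt)

# Usage sketch: feed tens of thousands of tokens of OCR output, name one header.
# with open("report_ocr.txt") as f:
#     print(extract_column(f.read(), "Total Revenue"))
```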

16 May 2023

17 May 2023

  • Stability AI releases an open-source variant of their closed-source hosted DreamStudio called Stable Studio. HN thread

20 May 2023

26 May 2023

27 May 2023

29 May 2023

31 May 2023

07 June 2023

20 June 2023

21 June 2023

23 June 2023

27 June 2023

06 July 2023

08 July 2023

18 July 2023

19 July 2023

20 July 2023

21 July 2023

01 August 2023

02 August 2023

04 August 2023

20 August 2023

25 August 2023

08 September 2023

  • Author swyx wrote a few months ago about The Rise of the AI Engineer. HN thread and conference in SF
  • From Benedict Evans newsletter in late June / early July, found this McCann WorldGroup video about how McCann helped Bimbo Bakery–the largest provider of hamburger and hot dog buns in Mexico–use generative AI to create custom logo/branding/signage for 42,000 small vendors of hamburgers and hot dogs.

20 September 2023

04 October 2023

09 October 2023

13 October 2023

16 October 2023

13 November 2023

14 November 2023

16 November 2023

11 December 2023

07 December 2024

08 February 2024

  • ‘Google launches Gemini Ultra, its most powerful LLM yet’
    • “Google Bard is no more. Almost exactly a year after first introducing its (rushed) efforts to challenge OpenAI’s ChatGPT, the company is retiring the name and rebranding Bard as Gemini, the name of its family of foundation models. More importantly, though, it is also now launching Gemini Ultra, its most capable large language model yet. Gemini Ultra will be a paid experience, though. Google is making it available through a new $20 Google One tier (with a two-month free trial) that also includes 2TB of storage and the rest of Google One’s feature set, as well as access to Gemini in Google Workspace apps like Docs, Slides, Sheets and Meet.”

14 February 2024

20 February 2024

21 February 2024

25 February 2024

27 March 2024

  • Interesting Harper’s article on defense industry, AI, and relationship with Silicon Valley. Goes back to usage of IBM System 360 in Vietnam War all the way to Hamas attacks October 7, 2023. HN thread

23 April 2024

24 June 2024

  • Steve Yegge considers junior employees (devs, lawyers, writers, etc.) in this blog post. HN thread.
    • Says that coding assistants have made a quantum leap in last 6 weeks (circa mid-May).

26 June 2024

29 June 2024

23 July 2024

04 September 2024