• 0 Posts
  • 16 Comments
Joined 2 years ago
Cake day: June 12th, 2023

  • Your first two paragraphs seem to rail against a philosophical conclusion you assume the authors drew from carrying out the Turing test, something like “this is evidence of machine consciousness.” I don’t really get the impression that any such claim was made, or that more education in epistemology would have changed anything.

    In a world where GPT-4 exists, the question of whether one person can be fooled by one chatbot in one conversation has long since stopped being interesting. The question of whether specific models can achieve statistically significant success is somewhat more compelling, not because it’s some kind of breakthrough but because it makes a generalized claim.

    Re: your edit, Turing explicitly puts forth the imitation-game scenario as a practicable proxy for the question of machine intelligence, “can machines think?” He directly argues that this scenario is indeed a reasonable proxy for that question. His argument, as he admits, is not a rigorous proof or a strongly held conviction, but “recitations tending to produce belief,” persuasive insofar as they are hard to rebut, or insofar as their rebuttals tend to be flawed. The whole point of the paper is to poke at the apparent differences between (a then-futuristic) machine intelligence and human intelligence. In this sense, the Turing test is indeed a measure of intelligence. That is not to say that a machine passing the test is somehow in possession of a human-like mind or has reached a significant milestone of intelligence.

    https://academic.oup.com/mind/article/LIX/236/433/986238


  • I don’t really use it for open-ended questions, but it’s good enough at code generation to be occasionally useful. If it can spit out 100 lines of code that are generally reasonable, it’s faster to adjust the generated code than to write it all from scratch. More generally, it’s good for generating responses whose content and structure are easy to verify (like a question you already know the answer to), with the value being in the time saved rather than in the content itself.



  • This already exists. https://libraryofbabel.info/

    Your comment appears in page 241 of Volume 3, Shelf 4, Wall 4 of Hexagon: 0kdr8nz6w20ww0qkjaxezez7tei3yyg4453xjblflfxx0ygqgenvf1rqo7q3fskaw2trve3cihcrl6gja1bwwprudyp9hzip5jsljqlrc8b9ofmryole35cbirl79kzc9cv2bjpkd26kcdi9cxf1bbhmpmgyc0l1fxz81fsc0p878e6u2rc6dci6n0lv52ogqkvov5yokmhs3ahi89i1erq46nv7d0h3dp2ezbb1kxdz7b4k9rm9vl32glohfxmk2c4t1v5wblssk6abtzxdlhc6g00ytdyree9q4w43j8eh57j8j8d4ddrpoale93glnwoaqunj8j2uli4uqscjfwwh6xafh119s4mwkdxk5trcqhv7wlcphfmvkx97i5k54dntoyrogo51n5i23lsms7xmdkoznop6nbsphpbi0hpm6mq3tuzy1qb677yrk832anjas7jybzxvuhgox49bhi21xhvfu0ny27888wv76hbtpkfyv4s57ljmn9sinju3iuc6na2stn9qvm1vo5yb9ktz1lcbjp0q9102ugpft1f7ngdzmnzv6qomn7zfnopn02v9wwe2gr2m6mo0o9vjmrvmd7fp4kjivsy6iu9cfz9dyu6gv7542ujz0vtj7m2ifpnfeezrb2gbwbgkbdx2taq7vlgjedqze22ywsyt1cacfxxpftjumke4vbtvmn6skj3mi5qnprrv9w4pq5t23xlvrxufsmri2uljpw72228q6jvh82e6936400czpzs8w6i25dvgk7vgj9o9r3k4nombsl3iiv1cogggcw9e5non0jn9ni1aacbisa1oqlzgi9qyhmmd67hkxsfri5958cyj6ryou5vgz7uc7j9kkjix420ys1tkcrhgf0jm6la9h7e06z7sdijeiw31junshzgvmmpplqw6qbzzqzs39jictbygt8u6704h48hsc7hlffm513zagtdbfvpbz32r0vqmjz2sudta4gfsnx6ac4h76djsh7th2h4265qeeainsx2xgslfst5namazisk6swvsbpcv3osvr7wiabkh61f5vuxymqadzxilggym3kfqtbdl3xsmwqcr6wuf4gpoviog89h5xfyawlh7k79k9j5fn1wq47f1m73lah1qhfdxyt1pv2biean3jjb8qv0raxz1oxi9zs9tnmmwrhccd9fij39ddgn1g7t03norvzjqcbqbzl9ibq9qrksnutbfc47z2727u9d9tp68z7u2hyb805wy3d4d0ia9q50p4xvevryycogr6212tau2iv3ya0fy09rsh0wilpqj8vxqug9zj7h2ya1hbnapmqecwtbjnetmt2t91mhb7hky32tl3fa5dqtuba2hm5faawvkazugbmngeojzw88p6hl0duaiup166r20tubj16x78c5m73rwpecco5w6z8ti8b2pgof8k8vu99jvyqiaq6c6adybtjwi8i6u95efp1hxpdsvtbo6nm7j1lmv9jzdspp0sd3qk5jmfpfs1cy18euwpk0apiuqqdy7hakfjx83nx920p4ptxu6bsl98iywgdydrr54u6nvmyxwg1hd2vxnu2yq3utvbx5z886ezzblw0izmntai8jstisdju4n12eed5yr1avv9k7mrr9fzqs6zo7uc3ixkv6fz2figpb5tr1obrlf4c30ghf8exsdgwn1e0uo76r3klnqfys51extpnq5v5swql36lirgok8frxnntoywaqtmyzm3vclnnvfqohz56hh823k5d50049f3lye9cil24yk5031u27dpi7895319mkyi2pkpwxgay4fnqfj38emdc0990ezpheam8ab132v588je7ur4dv1wazezyokate2rccnii1y4gy4subra7y5of25xxu8s3mjumal1ly
pu1360mrzmqdqdfsm9lvmewzg1608hlxx4le00jhcowg7xcxsbhbx4swwze9pkyk0x8vpsr5j1yja5jl0wn8mjh3gh9wvszd2tazvj7fbuym2pee0r0ifsky61fulwloxc5jkon63tvarj5jsxg3kghl44e7o1w2deeboaodjpuvgzg82wrxszd5jk0hwhvaopdb8wcqqopob4mj2pn36yhnw6k6sz5y56xlijr5a4s6xmc68r0c20d7zq53mrjbq281nfkrrtqgnv2i2rag3f6ara9t616vovgqn8kjwrivene6yyskzxb0d4b0qy0aq743dptvxr85sfhqbcevkn17rvqv1l9hzx983cngckwhdu15kzdv10mqf8yibu0q8s3khd8d3fi5lbl9yespks1q1tnc4y1bgjtvyf1oppnxhvpou71olv0yapyq46w9ld0ntigpba6equ55fvs0j1tp7qw1hr8gbjz10gwxhsx3lu0hubgukht7mbkwfsu4x9980z1srfhj2ayw2e2xf2627vgctymfbooy4eythnm8nzr4mqnfycwovjvbyg95luo4h3smka9d3jlr7dn9e3xwwkpl4dg8i1mj7g63ludud8q7chfh4xajosfaps2n6ntye7j8o4lrcsbqas8ayiutyq8ckbn67ejioufkowogubs8o5670nz13bb9gq3obf0y9xq60j8n8d6i8ahzhxlj2rfc7ndsfmzhusihkiz9fdovslzad7in5kldzhqk8z0cua8n8l0vjfsy96qytgz4wgkq41h6rrsegy9yg1fqhnavpltd067gicomzhye6czk4voghysqscrwavw3li9qdj0ikrlwumyymf8n5luhz1orsrxfw1rek6ghsqyu486dfp2hkbilyccquihck0269nu8y7bsha503ax2ecpxjiug54viy229k4ienp6lcnyx03mnpadeslwa87mu6tcb7t3c3ug7g0yf5le9v2hp094n60ipetkyfu21vqxah8sjjmuhk1gzxnmz01o1s9ndefpfcat0vn1x1anypagcboxp515nmnj9f2yol1opdytfx2dmy5ypdpyamsp2p3zsegmd15e1jbo6xsznda92oqxo9kvsww9k1kzsuwjl73drq038uls6izgqzmhry879ctrhryaj750b2s3hus9f1ainad3vzphmataq0lkn7bi62pu0xf4uqb5k2o8656zn6vzilgl653t38y12723v193fe5c7vbv6p5lw5ernj7bl4aev1ccyakxmkwl11hot51pvrsvqd8vdptfq2ezq2jjaebwx
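
    The site works because its address-to-page mapping is an invertible function: run it forward and an address yields a page; run it backward and any given text yields the one address where it “already” sits. I don’t know the site’s actual algorithm, so the sketch below is only a toy illustration of that invertibility idea, using a bijective base conversion over a hypothetical 29-symbol alphabet:

```python
# Toy "Library of Babel": every page is a string over a small alphabet,
# every address is a positive integer, and the mapping is a bijection,
# so "searching" for a text is just running the encoding in reverse.
# (Illustrative only: not the real libraryofbabel.info scheme.)

ALPHABET = "abcdefghijklmnopqrstuvwxyz ,."  # 29 symbols
BASE = len(ALPHABET)

def address_of(page: str) -> int:
    """Reverse direction: page text -> its unique address."""
    n = 0
    for ch in page:
        n = n * BASE + ALPHABET.index(ch) + 1  # +1 makes the map bijective
    return n

def page_at(address: int) -> str:
    """Forward direction: address -> page text."""
    chars = []
    while address > 0:
        address, rem = divmod(address - 1, BASE)
        chars.append(ALPHABET[rem])
    return "".join(reversed(chars))

text = "this comment already exists."
assert page_at(address_of(text)) == text  # round-trips exactly
```

    Because the mapping is a bijection, every possible page over this alphabet has exactly one address, which is the whole trick behind a “searchable” library.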




  • Your broader point would be stronger if it weren’t framed around what seems like a misunderstanding of modern AI. To be clear, you don’t need to believe that AI is “just” a “coded algorithm” to believe it’s wrong for humans to exploit other humans with it. But to say that modern AI is “just an advanced algorithm” is technically correct in exactly the same way that a blender is “just a deterministic shuffling algorithm.” We understand that the blender chops up food by spinning a blade, and we understand that it turns solid food into liquid; the precise way in which it rearranges the matter of the food is both incomprehensible and irrelevant.

    In the same way, we understand the basic algorithms of model training and evaluation, and we understand the basic domain task that a model performs. The “rules” governing this behavior at a fine level are incomprehensible and irrelevant, and certainly not dictated by humans. They are an emergent property of a simple algorithm applied to billions to trillions of numerical parameters, in which all the interesting behavior is encoded in some incomprehensible way.
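
    That last point, a simple training rule producing behavior no one dictated, shows up even at toy scale. This is just an illustrative sketch, not any production system: a two-layer network trained on XOR by plain gradient descent. The update rule is a few lines of arithmetic, yet the learned behavior ends up smeared across the weights:

```python
import numpy as np

# A tiny network trained on XOR by plain gradient descent.
# The training algorithm is simple and fully understood; the behavior
# it produces lives in the learned parameters, which are not
# individually interpretable even at this scale.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = np.tanh(X @ W1 + b1)           # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = out - y                    # backward pass: gradient of
    d_h = (d_out @ W2.T) * (1 - h**2)  # cross-entropy loss
    W2 -= 0.1 * h.T @ d_out; b2 -= 0.1 * d_out.sum(0)
    W1 -= 0.1 * X.T @ d_h;   b1 -= 0.1 * d_h.sum(0)

print(np.round(out.ravel()))  # the XOR function, encoded in 33 numbers
```

    No line of that code says “compute XOR”; the behavior emerges from repeating a trivial update across the parameters, which is the same relationship, scaled down absurdly, that holds between a training loop and a large model.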