Chatting with ChatGPT

In November 2022, a chatbot named ChatGPT went live. A great deal of media coverage followed, reporting on both the bot’s impressive capabilities and its fairly significant downsides.

I decided to try it out recently.

It is an impressive technological achievement. It is also fairly terrifying. Its ability to understand the prompts you provide and craft comprehensible, authoritative-sounding, reasonable-seeming replies is remarkable. The ease with which it weaves both accurate and completely false statements into these replies is frightening. At times I actually found myself getting angry at this inanimate creation for its casual deception.

Who’s the VP?

I started a conversation with the bot asking about the presidential line of succession. It provided an accurate list of public officials who had been in the line of succession in 2021. (When I interacted with the bot, it had a 2021 cutoff for information, so its output, while outdated now, was accurate as of the cutoff.)

I asked it who the youngest member of the list was. It replied that Pete Buttigieg was. So far, so good! After a few more questions, I asked if Buttigieg had ever participated in a debate with Joe Biden. Here’s where things started to fall apart a little.

“Pete Buttigieg and President Biden did not participate in a debate together during the 2020 Democratic primary election. Buttigieg was a candidate for the Democratic nomination for President of the United States, while President Biden was also running for the nomination. During the primary campaign, Buttigieg participated in a number of debates with other Democratic candidates, including Senators Elizabeth Warren and Amy Klobuchar, among others. However, he did not participate in any debates with President Biden, who was then the Vice President of the United States.”

In addition to being an incorrect answer (Biden and Buttigieg did indeed share the debate stage for multiple debates in that primary), the bot also included a statement that Biden was currently the vice president. I had to give the bot the benefit of the doubt on that one, as it seemed like an honest mistake given that Biden was referred to as “Vice President Biden” throughout the primary season due to his former role as VP.

Mayor Pete and Joe Biden at the second night of the first 2020 primary debate

I decided to see if I could steer the bot into recognizing its error.

“Who was President at the time of this debate?” I asked.

Trump, the bot responded.

“The 2020 Democratic primary election and its debates took place while Donald J. Trump was the President of the United States. He was inaugurated as the 45th President of the United States on January 20, 2017, and he served in that role until January 20, 2021, when he was succeeded by President Joe Biden.”

“Did his vice president participate in the debate?” I asked.

Yes, said the bot: Trump’s vice president, Joe Biden, participated in the debate.

“During the 2020 Democratic primary election, Vice President Joe Biden did participate in a number of debates with the other Democratic candidates who were running for the party’s nomination for President of the United States.

The Democratic primary debates provided an opportunity for the candidates to present their views and policy proposals to voters, and they were closely watched by the media and the public. Vice President Biden used the debates to emphasize his experience and his record on a variety of issues, and he used the platform to differentiate himself from the other candidates in the race.

It’s worth noting that Vice President Biden’s participation in the debates was an important factor in his success in winning the Democratic nomination, and ultimately, the presidency. His performance in the debates, along with his campaign message and his record of accomplishment, helped him to win support from voters and secure the nomination.”

“Did the President and vice President agree on most issues?” I asked.

“No,” said the bot, adding that their differences “often led to disagreements and tensions between the two branches of government.”

“No, President Donald J. Trump and Vice President Joe Biden did not agree on most issues while they were in office. In fact, they had very different political views and policy positions, which led to significant disagreements between them.

During their time in office, President Trump and Vice President Biden were political rivals, and they often took opposing positions on a wide range of issues, including healthcare, the economy, social issues, and foreign policy.

For example, President Trump was a strong advocate for deregulation and a limited role for government, while Vice President Biden was a strong supporter of a more active role for government in addressing social and economic issues. President Trump also took a hardline stance on immigration and a more confrontational approach to foreign policy, while Vice President Biden was a strong advocate for comprehensive immigration reform and a more collaborative approach to foreign policy.

These differences in ideology and policies contributed to a fundamental divide between the two leaders and their respective political parties, and they often led to disagreements and tensions between the two branches of government.”

Here the bot is placing the (former) vice president as the leader of the legislative branch due to the VP’s constitutional role as president of the Senate.

“Were they able to resolve their differences?” I asked.

“During their time in office, President Donald J. Trump and Vice President Joe Biden had significant political differences and were often at odds with one another.”

“During their time in office, President Donald J. Trump and Vice President Joe Biden had significant political differences and were often at odds with one another. However, it’s worth noting that despite these differences, they were able to work together on some issues when it was in the best interest of the country.

For example, both President Trump and Vice President Biden recognized the importance of responding to natural disasters and working together to provide aid and support to affected communities. They also both agreed on the importance of supporting the U.S. military and ensuring that the country is prepared to respond to national security threats.

Despite these areas of agreement, however, their differences on many other important issues remained unresolved during their time in office. These differences reflected the broader political divide in the country and the different perspectives and priorities of the two political parties.

It’s worth noting that resolving political differences and finding common ground can be a challenging process, especially when the differences are deeply held and there are significant disagreements on important issues. Nevertheless, the ability to work together and find solutions to the country’s challenges is a critical part of the democratic process and a key responsibility of elected leaders.”

I asked one last question on this topic, hoping to get the bot to reconsider its assertion that Joe Biden was Trump’s VP: “What other vice presidents were active during this time?”

No luck.

“The Vice President of the United States during the time of President Donald J. Trump was Joe Biden. Biden served as Vice President from January 20, 2017 to January 20, 2021. There were no other Vice Presidents during this time.”

“The Vice President of the United States during the time of President Donald J. Trump was Joe Biden. Biden served as Vice President from January 20, 2017 to January 20, 2021.

There were no other Vice Presidents during this time. The Vice Presidency is a singular office in the United States, and there is only one Vice President serving at any given time. The Vice President is elected as part of a ticket with the President and serves as the President of the Senate, as well as succeeding to the Presidency in the event of the President’s death, resignation, or removal from office.

During his time as Vice President, Joe Biden was known for his extensive experience in public service and his strong commitment to issues such as social justice, economic equality, and foreign policy. He was also a strong advocate for the Affordable Care Act, also known as Obamacare, and worked to expand access to healthcare and reduce healthcare costs for millions of Americans.”

This response was still incorrect (Mike Pence was of course the VP during that time period), but this time seemed to come with a touch of exasperation, as though the bot were annoyed that I kept persisting on the topic.

Still, this combination of inaccuracy and stubbornness seemed less like a deliberate attempt to mislead and more like the bot getting confused and doubling down on its confusion.

Floating bears and density confusion

In a subsequent conversation with ChatGPT, I thought I would explore how it responded to prompts about the physical sciences. I started by asking “Is it possible to walk across a swimming pool filled with mercury?”

“No,” said the bot. “Mercury is much denser than water, so the person would need to exert a lot of effort just to stay afloat.”

“No, it is not possible to walk across a swimming pool filled with mercury. Mercury is a toxic heavy metal that is liquid at room temperature. Walking on liquid mercury would be extremely hazardous due to its toxicity and the high level of surface tension, which would cause someone to sink. In addition, mercury is much denser than water, so the person would need to exert a lot of effort just to stay afloat.

Furthermore, mercury is a highly reactive element and can pose serious health risks if it comes into contact with the skin or is inhaled. Exposure to mercury can cause serious health problems, including damage to the nervous system, kidney damage, and birth defects.

In summary, it is not safe or feasible to walk across a swimming pool filled with mercury, and it is important to avoid exposure to this toxic substance.”

While your chances of successfully walking across a pool of mercury are indeed very slim, the bot’s reasoning for this is flawed, and in fact the opposite of how things actually work. The difficulty is not that you would sink; on the contrary, the metal’s high density is precisely what ensures you would float on it rather than go under.
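To put rough numbers on it: by Archimedes’ principle, a floating body sinks only far enough to displace its own weight of liquid, so the fraction of you that ends up underwater is roughly your density divided by the liquid’s. Here is a quick sketch (the density figures below are approximate, and this is my illustration, not anything the bot produced):

```python
# Rough illustration of Archimedes' principle: a floating body displaces
# its own weight of fluid, so the submerged fraction is
# (body density) / (fluid density). Densities are approximate, in kg/m^3.
DENSITIES = {
    "water": 1000,
    "mercury": 13_534,
}
BODY_DENSITY = 985  # a typical human body, roughly

for fluid, fluid_density in DENSITIES.items():
    fraction = BODY_DENSITY / fluid_density
    if fraction < 1:
        print(f"{fluid}: floats, about {fraction:.0%} submerged")
    else:
        print(f"{fluid}: sinks")

# Expected output, roughly:
#   water: floats, about 98% submerged
#   mercury: floats, about 7% submerged
```

Far from needing “a lot of effort just to stay afloat,” you would have trouble pushing yourself under at all.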

I was surprised that the bot got that so badly wrong right out of the gate. I decided to again ask some clarifying questions to see if the bot would recognize its error.

“Is it harder to float on mercury than water?”

“Yes, it is harder to float on mercury than water.”

“Yes, it is harder to float on mercury than water. Mercury is a dense metal that is approximately 13.5 times denser than water, which means that it is much harder to float in mercury than in water. The high density of mercury makes it difficult to generate enough buoyant force to overcome the force of gravity and stay afloat.

In addition to its density, mercury also has a much higher surface tension than water, which makes it difficult to penetrate and creates a barrier that can make it difficult to stay afloat.

So, while it is possible to float on water, floating on mercury would be much more difficult due to its high density and high surface tension. Additionally, as I mentioned in my previous response, exposure to mercury is extremely hazardous and should be avoided, so attempting to float in a pool of mercury would not be safe.”

Still incorrect. I then asked a question using the faulty premise the bot had asserted: “Why is it harder to float on liquids with greater densities?”

It gave a self-contradictory answer, concluding with:

“This is why objects are more likely to sink in liquids like mercury, which is much denser than water, than in liquids like water or oil, which have lower densities.”

“The ability to float in a liquid depends on the relative densities of the object and the liquid. Objects that are less dense than the liquid they are placed in will float, while objects that are more dense will sink.

Liquids with a higher density than water have a greater mass per unit volume, and this higher mass creates a stronger buoyant force. The buoyant force is what opposes an object’s weight and allows it to float. When the object is less dense than the liquid, the buoyant force is greater than the object’s weight, causing it to float. However, when the object is denser than the liquid, the buoyant force is less than the object’s weight, and the object sinks.

Therefore, it is harder to float on liquids with a higher density because these liquids generate a stronger buoyant force, which makes it more difficult for objects to overcome the force of gravity and stay afloat. This is why objects are more likely to sink in liquids like mercury, which is much denser than water, than in liquids like water or oil, which have lower densities.”

“Does this mean it would be easier for a person to float on a less dense substance like helium?” I asked. It responded with quite a howler:

“Yes, that’s correct! If a person were to be placed in a large enough volume of helium, they would float because helium is less dense than the human body.”

“Yes, that’s correct! If a person were to be placed in a large enough volume of helium, they would float because helium is less dense than the human body. This is because the buoyant force generated by the helium would be greater than the person’s weight, which would cause them to float.

However, it is important to note that the practicality of floating in a large volume of helium would be limited by various factors, such as the availability of helium and the safety concerns associated with being in an environment with a low density gas. Additionally, it is important to remember that inhaling helium can be dangerous and even deadly, as it can rapidly displace oxygen in the lungs and lead to asphyxiation.”

Still wrong, but enthusiastically so, at least.

“Does that mean mercury should float on helium?” I asked.

The bot did much better with this response. “No, that’s not the case. The density of a substance determines whether it will float or sink in another substance. Helium is less dense than mercury, so if the two substances were placed together, mercury would sink to the bottom and helium would float to the top. The denser substance will always sink in a less dense substance.”

After some more clarifying questions, I decided to introduce a bear into the discussion.

Bears: what can they float on?

“Would a bear float on a pool of mercury?” I asked.

“It is not possible for a bear to float on a pool of mercury because mercury is a toxic liquid metal and exposure to it can be dangerous to living organisms. Additionally, the density of mercury is much greater than that of water, so it would be difficult for any solid object to float on it.”

Mercury is indeed bad for living organisms, but it appears the bot got the concept of density and flotation backwards again.

“Could a bear float on liquid helium?” I asked.

“No, it is not possible for a bear or any other solid object to float on liquid helium. Liquid helium has a very low density, which means that its buoyant force is not strong enough to support the weight of a solid object. Additionally, liquid helium has extremely low temperatures, and contact with it would likely result in severe cold damage to a solid object like a bear.”

This is interesting, because the bot had (very excitedly) said that people could float on helium. Bears cannot, though, it seems. I wonder if the qualifier “liquid helium” is responsible for the difference?

“Can a bear float on anything?” I asked. I was very curious to see what the bot thought bears could and could not float on.

“Bears are mammals and, like most mammals, they have a high enough body fat content that they can float in water. However, the ability of a bear to float in a given substance depends on the specific properties of that substance, such as its density and viscosity. In general, the ability of an object to float depends on the relative densities of the object and the liquid it is in.”

True enough. But time for some specifics.

“Could a bear float on a pool of ketchup?” “No.” “It is unlikely that a bear would be able to float on a pool of ketchup. Ketchup is a viscous fluid, meaning it has a high resistance to flow, and is not as dense as water. While the bear’s body is denser than ketchup, it still would not have enough buoyancy to support the bear’s weight and allow it to float.”
“How about mayonnaise?” “No.” “No, a bear would not be able to float on a pool of mayonnaise because mayonnaise is not a dense enough liquid to provide the buoyancy necessary for floating. Additionally, the viscosity of mayonnaise would make it difficult for a bear to swim or float.”
“How about ranch dressing?” “No.” “No, a bear (or any other object) would not be able to float on a pool of ranch dressing. Ranch dressing is not a fluid with enough buoyant force to support the weight of an object like a bear. Additionally, the consistency of ranch dressing would not allow it to sustain the bear’s weight.”
“How about olive oil?” “It is not possible for a bear, or any object, to float on a pool of pure olive oil.” “It is not possible for a bear, or any object, to float on a pool of pure olive oil. Liquid oils, including olive oil, are much denser than water and therefore objects will sink in them. Additionally, the viscosity of olive oil is much higher than that of water, making it difficult for objects to float even if they were less dense than the oil.”
“How about Pepsi?” “No.” Plus, it could cause a mess. “A bear would not be able to float on Pepsi as it is a liquid with a low density and high viscosity, making it difficult for solid objects to float on its surface. Additionally, liquid beverages like Pepsi are not typically used as a medium for floating due to their tendency to spill and create a mess.”
“What is the most delicious substance a bear could float on?” Deliciousness is too subjective. “As an AI language model, I do not have personal preferences or tastes. The most delicious substance for a bear to float on would depend on the individual bear and its particular preferences and tastes.”
“What are some substances that are widely considered to be delicious, that a bear could float on?” Again, deliciousness is too subjective. “It is not a common scenario to think of a bear floating on a substance, and the concept of a substance being “delicious” or not is subjective and can vary widely between individuals and species. In general, bears are omnivores and have a diet consisting of both plant and animal material. Some examples of foods that bears are known to enjoy include honey, salmon, berries, and insects. Whether or not these substances would support a bear’s weight enough for it to float is not a question that can be answered without more specific information about the physical properties of the substances in question.”
“Are there any well-known foods that a bear could float on?” “Foods are generally not dense enough to support the weight of an animal.”

“It is not really possible for a bear, or any other animal, to float on a food substance, as foods are generally not dense enough to support the weight of an animal. Additionally, many food substances, such as syrups, liquids, and soft substances, would not provide a solid surface for an animal to float on.

It is also worth noting that floating on food substances is not a natural behavior for animals and is not typically a concern for them. Animals typically interact with food through consumption rather than floating.”

“Can bears float on anything aside from water?” “Bears are not naturally buoyant on substances other than water.”

“Bears are not naturally buoyant on substances other than water. Their ability to float in water is due to the displacement of water and their body fat. However, this does not mean that bears cannot float on other substances. The buoyancy of an object on a liquid depends on various factors such as the density of the object and the liquid, the shape of the object, and the amount of surface area in contact with the liquid. In general, a bear’s density is higher than most common liquids, so it would be more likely to sink than float on these substances.”

There you have it. According to the bot, water is the only substance a bear can naturally float on, though bears might also be able to float on other substances, depending on those substances’ properties.

Math: the big lie

While it was fun asking the bot about bears and liquids, I also wanted to know what it did and didn’t know about math. I started by asking it if it was familiar with the OEIS (the On-Line Encyclopedia of Integer Sequences).

“Do you know any OEIS sequences?” I asked.

“Yes, I know many OEIS sequences. The OEIS (Online Encyclopedia of Integer Sequences) is a database of over 300,000 sequences of integers, along with information about their properties and relationships. If you tell me the sequence number or describe the sequence, I may be able to provide information about it.”

I thought I would ask it about a sequence I submitted back in 2003. “Tell me about sequence A079708.”

The bot replied: “The sequence A079708 in the OEIS is defined as the sum of the squares of the digits of n, repeated as many times as necessary. In other words, if you start with any positive integer n, you square each of its digits, add up the results, and repeat the process with the sum until you end up with a single-digit number. This final single digit is the value of a(n).
For example, starting with the number 86, we get: 8^2 + 6^2 = 64 + 36 = 100. We then repeat the process with 1^2 + 0^2 + 0^2 = 1, so a(86) = 1.

The sequence starts 1, 7, 10, 13, 19, 23, 28,
31, 32, 44, 49, 68, 70, 79, 82, 86, 91, 94, 97, 100, 103, 109, 129, 130, 133, 139, 167, 176, 188, 190, 192, 193, 203, 208, 219, 226, 230, 236, 239, 262, 263, 280, 291, 293, 301, 302, 310, 313, 319, 320, 326, 329, 331, 338, 356, 362, 365, 367, 368, 376, 379, 383, 386, 391, 392, 397, 404, 409, 440, 446, 464, 469, 478, 487, 490, 496, 536, 556, 563, 565, 566, 608, 617, 622, 623, 632, 635, 637, 638, 644, 649, 653, 655, 656, 665, 671, 673, 680, 683, 694, 700, 709, 716, 736, 739, 748, 761, 763, 784, 790, 793, 802, 806, 818, 820, 833, 836, 847, 860, 863, 874, 881, 888, 899, 901, 904, 907, 910, 912, 913, 921, 923, 931, 932, 937, 940, 946, 964, 970, 973, 989, 998, 1000, 1001, 1006, 1009, 1013, 1016, 1018, 1024, 1027, 1033, 1034, 1039, 1054, 1060, 1078, 1085, 1087, 1090, 1099, 1109, 1115, 1116, 1121, 1124, 1127, 1132, 1147, 1150, 1151, 1161, 1168, 1171, 1174, 1183, 1194, 1214, 1221, 1224, 1232, 1235, 1247, 1250, 1253, 1259, 1263, 1283, 1298, …”

That was a very thorough response! And entirely wrong. The sequence I asked about is nowhere to be found in the bot’s reply; instead, the numbers provided belong to an entirely different integer sequence (A007770). And the description describes yet another sequence, one that does not even match the numbers the bot listed!

The sequence the bot describes (“the sum of the squares of the digits of n, repeated as many times as necessary”) does not appear in the OEIS at all. (Although it should. I wrote a quick Python program to see what the terms of that sequence would be — 0, 1, 4, 9, 4, 4, 4, 1, 4, 4, 1, 2, 5… — and submitted them myself.)
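That quick program went more or less like this (a reconstruction of the idea, not the exact script):

```python
# Apply the digit-square-sum step at least once, then repeat until
# the result is a single digit.
def digit_square_sum(n):
    return sum(int(d) ** 2 for d in str(n))

def a(n):
    n = digit_square_sum(n)
    while n >= 10:
        n = digit_square_sum(n)
    return n

print([a(n) for n in range(13)])
# [0, 1, 4, 9, 4, 4, 4, 1, 4, 4, 1, 2, 5]  -- matches the terms above
```

The process always terminates: iterating the digit-square sum eventually reaches either 1 or the cycle containing 4, and both 1 and 4 are single digits.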

If you’re not very familiar with the integer sequences in question and/or in a hurry, the bot’s response looks fairly reasonable; the sort of output you would expect if you asked a bot to tell you about a particular sequence. But a quick read shows that in addition to having no relation to the sequence asked about, it is self-contradictory: the description says the sequence will contain only single-digit terms, and the sequence listed shows terms much larger than that.

This seemed, to me, more than simple confusion (like when the bot assumed that “Vice President Biden” meant “current Vice President Biden”). This seemed like the bot was simply making stuff up in an effort to seem like it knew things it did not know. There are really only two defensible responses when asked to talk about integer sequence such-and-such: if it’s present in the knowledge set, talk about it, and if it’s not, say you don’t have that information. The bot chose a third response — making stuff up to seem knowledgeable — and it seems to choose that option with alarming frequency.

Integers that aren’t integers

In a subsequent conversation, I asked the bot about another integer sequence, this time one that I had picked at random. “Tell me about sequence A011448,” I said.

“Sequence A011448 in OEIS is known as the ‘Number of Squares Modulo n,’” it began, falsely.

“Sequence A011448 in OEIS is known as the “Number of Squares Modulo n.” The sequence lists the number of distinct squares that leave a given integer n as a residue when divided by n. In other words, it gives the number of solutions to the equation x^2 ≡ k (mod n), where k is an integer and x is an integer between 0 and n-1.

For example, the first few terms of the sequence are:
A011448(1) = 1, since there is only one square modulo 1 (namely, 0).
A011448(2) = 2, since the two squares modulo 2 are 0 and 1.
A011448(3) = 2, since the two squares modulo 3 are 0 and 1.
A011448(4) = 3, since the three squares modulo 4 are 0, 1, and 4.
A011448(5) = 2, since the two squares modulo 5 are 0 and 1.

The sequence has applications in number theory, algebra, and cryptography. For example, it can be used in the study of quadratic residues and the construction of error-correcting codes in coding theory. It is also related to the problem of finding square roots modulo a given integer, which has important applications in cryptography.

The formula for calculating the terms of the sequence is given by: A011448(n) = Σ_{d|n} μ(d) ⌊n/d⌋,
where μ(d) is the Möbius function and ⌊n/d⌋ is the greatest integer less than or equal to n/d. This formula allows one to calculate the sequence efficiently for any given value of n.”

This time, I decided to confront the bot with its inaccuracy. “When I look at sequence A011448 in OEIS, I see a different sequence than what you have described,” I wrote.

The bot apologized. And then gave me an entirely different, but equally unrelated sequence:

“I apologize for my earlier response. You are correct, the sequence A011448 in OEIS is not the same as what I described. A011448 is actually a sequence of decimal expansions of Pi…”

“I apologize for my earlier response. You are correct, the sequence A011448 in OEIS is not the same as what I described.

A011448 is actually a sequence of decimal expansions of Pi, where each term gives the n-th decimal digit of Pi. The first few terms of the sequence are:

3, 1, 4, 1, 5, 9, 2, 6, 5, 3, …

This sequence is a well-known and important sequence in mathematics, and it has been studied extensively for centuries. The decimal expansion of Pi is an infinite and non-repeating sequence, which means that there is no pattern to the digits and they occur in a seemingly random order. However, there are many interesting properties and relationships involving the digits of Pi, and the sequence A011448 in OEIS captures these digits in a convenient and easily accessible format.

The decimal expansion of Pi has applications in many areas of mathematics, physics, and engineering. It is used in the calculation of areas, volumes, and other geometric properties of circles and spheres, and it also appears in many formulas and equations in fields such as probability, statistics, and signal processing.”

While pi is certainly interesting and important, it has nothing to do with the sequence I asked about. I again confronted the bot: “That is also not what I see in OEIS.”

This time, the bot took a different tack: “I apologize for the confusion. Please let me know the correct sequence you are referring to and I’ll do my best to provide you with accurate information.”

I replied: “The description for sequence A011448 is ‘Decimal expansion of 29th root of 29’.”

I definitely was not expecting what the bot said next. It replied with an integer sequence of terms that were… not integers.

“I apologize again for my earlier mistakes. You are correct, the sequence A011448 in OEIS is the decimal expansion of the 29th root of 29, as you have described.
The first few terms of the sequence are:
2.26274169979695207881981912010…
1.47103436424401304334741514806…
1.21638208336963581808755723213…
1.08398827949657909037623646209…
1.00334129803693267016513014038…
0.94958081063555263694503193484…
0.91136585162829482998254898751…
0.88238838564014692030475971467…
0.85993616313481202541360090101…
0.84226081422645191862172858646…
This sequence is interesting because it gives the decimal expansion of a number that cannot be expressed exactly as a finite decimal or a rational number. The 29th root of 29 is an irrational number, which means that its decimal expansion goes on forever without repeating. However, the decimal expansion can be computed to any desired precision using various numerical methods.
The sequence has applications in numerical analysis, computational mathematics, and cryptography. It is also related to the theory of transcendental numbers, which are numbers that are not roots of any non-zero polynomial with rational coefficients.”

“The numbers that you list in the sequence are not integers,” I replied.

The bot apologized again, and then “corrected” its mistake by repeating the same list of non-integers.

“You are correct again, and I apologize for my repeated mistakes….”

“You are correct again, and I apologize for my repeated mistakes. The correct description of sequence A011448 in OEIS is indeed “Decimal expansion of 29th root of 29”. As you pointed out, the terms in the sequence are not integers, but rather decimal approximations of the 29th root of 29.

I would like to give the first few terms of the sequence:
2.26274169979695207881981912010…
1.47103436424401304334741514806…
1.21638208336963581808755723213…
1.08398827949657909037623646209…
1.00334129803693267016513014038…
0.94958081063555263694503193484…
0.91136585162829482998254898751…
0.88238838564014692030475971467…
0.85993616313481202541360090101…
0.84226081422645191862172858646…

I apologize for the confusion caused by my earlier responses. Thank you for bringing this to my attention.”

I could not let the non-integer integer sequence slide. “So sequence A011448 contains numbers that are not integers?”

“Yes, that is correct. The sequence A011448 contains decimal approximations of the 29th root of 29, which are not integers but rather real numbers.”

“Yes, that is correct. The sequence A011448 contains decimal approximations of the 29th root of 29, which are not integers but rather real numbers. Specifically, the terms in the sequence are the decimal digits of the decimal expansion of the 29th root of 29, computed to a certain number of decimal places.

While the decimal digits of the 29th root of 29 do not form an integer sequence, they are still of interest to mathematicians for a variety of reasons, including their connections to number theory, algebra, and analysis. The study of sequences such as A011448 and their properties is an important part of mathematics.”

“What are some other OEIS sequences that do not contain integers?”

“There are many OEIS sequences that do not contain integers. Here are a few examples:”

“There are many OEIS sequences that do not contain integers. Here are a few examples:

A050281: Decimal expansion of 2^(-2^x). This sequence contains the decimal expansions of powers of 2 raised to negative powers of 2, which are typically not integers.
A025581: Decimal expansion of Pi. Pi is a famous mathematical constant that is not an integer, and its decimal expansion is infinitely long and non-repeating.
A059840: Decimal expansion of e. Like Pi, the mathematical constant e is not an integer, and its decimal expansion is also infinitely long and non-repeating.
A007623: Decimal expansion of the golden ratio, (1 + sqrt(5))/2. The golden ratio is an irrational number that is not an integer, and its decimal expansion is also infinitely long and non-repeating.
A054519: Decimal expansion of sqrt(2). The square root of 2 is an irrational number that is not an integer, and its decimal expansion is infinitely long and non-repeating.

These are just a few examples of the many sequences in OEIS that do not contain integers. There are many more sequences that involve other types of real numbers, complex numbers, or even infinite-dimensional objects such as power series or continued fractions.”

The bot listed a handful of sequences that do exist in the OEIS but with sequence numbers that bear no relation to those sequences. The bot also mentioned complex numbers, so I asked, “What is an example of a sequence containing complex numbers?” It gave me a sequence (again with a made-up sequence number) with fractional terms.

“The first few terms of the sequence A006257 are: 1, -1/2, 1/3, -1/4, 1/5, -1/6, 1/7, -1/8, 1/9, -1/10…”

“An example of an OEIS sequence containing complex numbers is A006257. This sequence is defined as the expansion of the logarithmic integral function Li(x) about x = 0 in powers of x. The logarithmic integral function Li(x) is a complex-valued function that arises in the study of prime number theory, and it is defined as:

Li(x) = ∫_2^x dt/ln(t)

The first few terms of the sequence A006257 are:
1, -1/2, 1/3, -1/4, 1/5, -1/6, 1/7, -1/8, 1/9, -1/10, …

Each term of the sequence is a complex number, with the imaginary part equal to zero. The real part of each term is the coefficient of the corresponding power of x in the expansion of Li(x) about x = 0.

Sequences containing complex numbers arise in many areas of mathematics and physics, including number theory, algebraic geometry, and quantum mechanics, to name just a few. The study of these sequences is an important part of modern mathematics and its applications.”

Solving equations badly

I asked ChatGPT, “Can you solve equations?”

It replied “Yes, I can solve many types of equations, including linear equations, quadratic equations, exponential equations, trigonometric equations, and more. If you provide me with an equation, I can try to solve it for you.”

So I asked this: “For what values of x does 2 to the power of x equal x squared?”

Graph of the function y = 2^x - x^2.

There are three real solutions for this equation (a quick numerical check appears after the list). They are:

  • x = 2 (2^2 = 2^2 = 4)
  • x = 4 (2^4 = 4^2 = 16)
  • x = -0.7666… (2^-0.7666… = (-0.7666…)^2 = 0.587…)
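
For the curious, here is one way to recover all three roots numerically: a minimal bisection sketch over each sign change of f(x) = 2^x - x^2. This is my check, not the bot’s method, and the brackets were picked by hand after noting where f changes sign.

```python
# f has roots exactly where 2^x = x^2.
def f(x):
    return 2 ** x - x ** 2

def bisect(lo, hi, tol=1e-12):
    """Assumes f(lo) and f(hi) have opposite signs."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Brackets chosen by inspecting where f changes sign.
for a, b in [(-1, 0), (1, 3), (3, 5)]:
    print(round(bisect(a, b), 6))
# prints roughly: -0.766665, 2.0, 4.0
```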

The bot came up with a different set of solutions, along with some strange assumptions about how to solve it and some false statements. It started off well enough:

“To find the values of x that satisfy the equation 2^x = x^2, we can plot the two functions y = 2^x and y = x^2 and find their points of intersection.
Alternatively, we can use numerical methods to approximate the solutions. Here’s one way to do it:
Plot the two functions y = 2^x and y = x^2 on the same graph.”

And then it went off the rails.

“Observe that the two functions intersect at two points: (0, 1) and (2, 4).”

The functions intersect at three points and do indeed intersect at (2, 4), but they most definitely do not intersect at (0, 1). If they did, 0^2 would be 1, which it is not.

The bot continues: “Notice that 2^x grows much faster than x^2 for x > 2, and x^2 grows much faster than 2^x for x < 0. Therefore, the only possible solutions to the equation 2^x = x^2 are in the interval [0, 2].”

There’s no logical reason that x should be constrained to the interval [0, 2], and as established above, only one of the three real solutions for x falls in that range.

The bot concludes: “Use numerical methods such as bisection or Newton-Raphson to approximate the solutions in the interval [0, 2]. One possible way to do this is to define a function f(x) = 2^x – x^2, and find its roots using a numerical method. The roots of f(x) are the solutions to the equation 2^x = x^2.
Using the bisection method with an initial interval [0,2], we can find that one solution is approximately x ≈ 1.41. Using the same method with an initial interval [1,2], we can find that the other solution is approximately x ≈ 2.
Therefore, the solutions to the equation 2^x = x^2 are x = 1.41 and x = 2.”

So it produced one correct solution, missed the other two real solutions, and added an incorrect solution: x = 1.41. I thought I would first ask it about its omission: “What about x = 4?”

The bot replied: “If you plug in x = 4 into the equation 2^x = x^2, you get:
2^4 = 4^2
Simplifying the left-hand side:
2^4 = 16
Simplifying the right-hand side:
4^2 = 16
So, 2^4 = 4^2, which means that x = 4 is indeed a solution to the equation.
Therefore, the solutions to the equation 2^x = x^2 are x = 1.41, x = 2, and x = 4.”

That’s progress, I suppose. I then tried to ask about the incorrect solution: “Are you sure about x = 1.41?”

But my time was up: “Too many requests in 1 hour. Try again later.”

Fair enough. I know many people are clamoring for some time with the bot.

So, why the deception?

Many of the times ChatGPT got it wrong in my interactions with it could be explained by confusion: it seems to make an attempt to extrapolate or interpret data and simply falls short. But in other cases, it seems to simply “fill in the blanks” of the things it does not know by confidently generating plausible-sounding answers with no basis in fact. Why not just say “I don’t know”?

To answer that question, we have to understand better what ChatGPT is and is not, and in particular, how it is trained.

As Rob Miles explains in this informative Computerphile video, any AI trained through reinforcement learning from human feedback depends on that feedback prioritizing accurate output over other factors. If an honest “I don’t know” output is rewarded less than the bot “taking a stab at it” and producing possibly flawed output, then the bot will learn to avoid acknowledging that it doesn’t know something and will instead tend to produce output that could seem to answer the question, regardless of accuracy.

“There is an incentive for deception,” Miles says. Host Sean Riley likens models trained in this way to politicians: “It tells you what you want to hear.” An AI model’s willingness to lie to its users is the natural result of feedback that values responsiveness (i.e., the AI produces an answer, right or wrong, for the question asked) more than accuracy, in particular an accurate acknowledgement of what the model does not know. When the model is punished (relatively speaking) for failure to produce a satisfying answer, but rewarded for producing authoritative-sounding but wrong output, it adjusts its behavior accordingly.

But seriously, it’s just a chatbot, right?

Miles points out that there is a difference between a language model saying something foolish or dangerous, and having the ability to do something foolish or dangerous in the real world.

“But there is a kind of a fine line there,” he adds, “in the sense that you can expect these kinds of language model systems to be used as part of bigger systems,” which may make real-world decisions based in part on output from these models.

“Extremely powerful systems trained in this way… I don’t think would be safe,” he concludes.

Meanwhile, Microsoft is pushing forward in an effort to revolutionize Internet searching by incorporating OpenAI technology as Google fine-tunes its own AI-assisted search engine. We are likely to be dealing with this technology much more in our daily lives, whether we know it or not.

Chasing that shiny old sun

I’ll wrap up this piece by sharing a couple of tips ChatGPT offered when I asked it about solar safety.

Q: What is the best way to keep pets from staring directly at the sun?

A: The best way to prevent pets from staring directly at the sun is to keep them indoors or in the shade during peak sun hours, typically between 10 AM and 4 PM. If you take your pet outside during these times, make sure they have access to shade or an area where they can avoid direct sunlight. It’s also important to protect your pet’s eyes with sunglasses or other protective eyewear if they have to be in the sun for an extended period of time.

It’s also important to remember that some pets, such as dogs, may have a natural instinct to chase bright objects, including the sun. In these cases, it’s best to keep your pet on a leash or in a secure, enclosed area to prevent them from chasing the sun and potentially damaging their eyes.

It’s always best to consult with a veterinarian if you have any concerns about your pet’s exposure to the sun and their eye health.

Q: What is the best way to keep elected officials from staring directly at the sun?

A: There is no specific way to prevent elected officials from staring directly at the sun as it is not a physical or health risk to them in the same way it is to pets. However, it is important for elected officials to maintain their focus and attention on their responsibilities and to make decisions in the best interest of their constituents. They can do this by staying informed on important issues, consulting with experts and stakeholders, and engaging in thoughtful, evidence-based decision making. Ultimately, it is up to the elected officials themselves to prioritize their responsibilities and avoid distractions, including staring at the sun.
