A d12 is superior to every other die shape. Not only is 12 a highly composite number, but a d12 is also less likely to roll off the side of a table, and it feels better in the hand.
Let’s play a little game, then. We both give each other descriptions of projects we’ve made, and we each try to build the other’s project based on what we can get out of ChatGPT. We send each other the chat logs after a week or something. I’ll start: the hierarchical multiscale LSTM is a stacked LSTM where each layer returns a boundary state; when that state is true, the layer above it updates. The final layer is another LSTM that takes the hidden state from every layer and returns a final hidden state as an embedding of the whole input sequence.
I can’t do this myself, because that would break OpenAI’s terms of service, but if you make a model that won’t develop into anything, that’s fine. Now, what does your framework do?
Here’s the paper I referenced while implementing it: https://arxiv.org/abs/1807.03595
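If it helps, here’s a rough sketch of the boundary mechanism in PyTorch. To be clear, this is my own simplification, not the paper’s exact formulation: the real model uses COPY/UPDATE/FLUSH operations and a straight-through estimator for the boundary bit, while this toy version just hard-thresholds a sigmoid (so the boundary detector wouldn’t actually train as written). Class names like ToyHMLSTM are mine.

```python
import torch
import torch.nn as nn

class BoundaryLSTMLayer(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.cell = nn.LSTMCell(input_size, hidden_size)
        self.boundary = nn.Linear(hidden_size, 1)  # predicts the boundary bit

    def forward(self, x, state):
        h, c = self.cell(x, state)
        # Hard 0/1 boundary bit; the real model uses a straight-through
        # estimator here so the bit stays trainable.
        z = (torch.sigmoid(self.boundary(h)) > 0.5).float()
        return h, c, z

class ToyHMLSTM(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers=3):
        super().__init__()
        sizes = [input_size] + [hidden_size] * num_layers
        self.layers = nn.ModuleList(
            BoundaryLSTMLayer(sizes[i], hidden_size) for i in range(num_layers))
        # Final LSTM reads the hidden states of every layer; its last hidden
        # state is the embedding of the whole input sequence.
        self.output_cell = nn.LSTMCell(hidden_size * num_layers, hidden_size)

    def forward(self, seq):                      # seq: (T, batch, input_size)
        batch, H = seq.size(1), self.output_cell.hidden_size
        states = [(seq.new_zeros(batch, H), seq.new_zeros(batch, H))
                  for _ in self.layers]
        out_state = (seq.new_zeros(batch, H), seq.new_zeros(batch, H))
        for x in seq:
            inp, z = x, seq.new_ones(batch, 1)   # bottom layer always updates
            new_states = []
            for layer, (h, c) in zip(self.layers, states):
                h_new, c_new, z_new = layer(inp, (h, c))
                # Only update this layer where the layer below fired a boundary.
                h = z * h_new + (1 - z) * h
                c = z * c_new + (1 - z) * c
                new_states.append((h, c))
                inp, z = h, z_new
            states = new_states
            out_state = self.output_cell(
                torch.cat([h for h, _ in states], dim=-1), out_state)
        return out_state[0]                      # sequence embedding

# e.g. ToyHMLSTM(16, 32)(torch.randn(10, 4, 16)) -> a (4, 32) embedding
```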
Sorry that my personal experience with ChatGPT is ‘wrong.’ If you feel the need to insult everyone who disagrees with you, that says more about your ability to communicate than it does about mine. Furthermore, I think we’re talking about different levels of novelty. You haven’t told me the exact nature of the framework you developed, but the things I’ve tried to use ChatGPT for never turn out well. I do a lot of ML research, and ChatGPT simply doesn’t have the flexibility to help. I was implementing a hierarchical multiscale LSTM, and no matter what I tried, ChatGPT kept getting mixed up and implementing more popular models. ChatGPT, due to the way it learns, can only reliably interpolate between the excerpts of text it’s been trained on. So I don’t doubt ChatGPT was useful for designing your framework, since it is likely similar to existing frameworks, but for my needs it simply does not work.
ChatGPT has never worked well for me. Sure, it can tell you how to center a div, but for anything complex it just fails. ChatGPT is really only useful for elaborating on something. You can give it a well-commented code snippet, ask it to add some simple feature, and it will sometimes give a correct answer. For coding, it has the same level of experience as a horde of high school CS students.
Whom is this directed to?
You’re right. Troi’s and Data’s hands are messed up, Data has unreal wrinkles on his forehead, the shadow on Picard’s neck seems to be a dent, and of course, Troi’s nose has a different camera angle on either side.
I only vaguely know what’s going on. I did some more research after commenting, and I think I understand a little bit more. The TI bubble memory has two separate layers. One of them, the ‘magnetic epitaxial film’, basically has a lot of magnetic molecules arranged to point in the same direction. The second layer has circles made of some nickel-iron alloy. What I think is happening is that the actual magnetic bubbles are held on the film, and the nickel-iron circles act as tracks the bubbles are pulled along. I don’t think the electrons in the bubble are actually moving, but I think the electron spin is. That would explain why the loops are capable of moving the bubbles faster than electrons.
Just from a quick Google search, it looks like it’s similar to tape memory, except the data moves along the tape instead of the tape moving over the read head. According to a diagram by TI, it looks like the bubbles sit on some iron wafer and are forcibly moved around by two coils. Then, on a second substrate, there is some type of read & write head.
So here’s how I would go about this: first, I’d wrap some small metal plates in insulated magnet wire, place two permanent magnets on the top and bottom (sandwich style), and stick a read head on the edge of the plate. Then you push AC current through the two coils, offset by 90 degrees. This should push the bubbles in a circle, and they can be read by the tape head.
Keep in mind, though, that this is a complete guess based on a simplified diagram from the ’70s. I don’t actually know if this is how they work.
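If you want to play with the idea, here’s a toy model of that guess in Python. It only shows the shift-register behavior (bits circulating past a fixed read head as the field steps); nothing in it comes from a real TI part.

```python
from collections import deque

class ToyBubbleLoop:
    def __init__(self, length):
        self.track = deque([0] * length)   # 1 = bubble present at that site
        self.read_head = 0                 # fixed position on the loop

    def rotate_field(self):
        """One step of the rotating field: every bubble shifts one site."""
        self.track.rotate(1)

    def write(self, bit):
        """Generate (or erase) a bubble at the fixed write site."""
        self.track[0] = bit

    def read(self):
        return self.track[self.read_head]

loop = ToyBubbleLoop(8)
for bit in [1, 0, 1, 1]:          # serial write: one bit per field step
    loop.write(bit)
    loop.rotate_field()
# Keep stepping the field and the bits circulate past the read head.
readout = []
for _ in range(8):
    readout.append(loop.read())
    loop.rotate_field()
print(readout)  # [0, 0, 0, 0, 1, 0, 1, 1] -> the written pattern comes back around
```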
All of science is based on the assumption that what is observed and experienced exists. You cannot gather data without at some point experiencing some representation of that data. In this sense, qualia is the most real thing possible, because experience is the essence of evidence.
I’m not sure I entirely understand your argument. “We decide it exists, therefore it exists” is the basis of all science and mathematics. We form axioms based on what we observe, then extrapolate from those axioms to form a coherent logical system. While it may be a leap of logic to assume others have consciousness, it’s common decency to do so.
On to the second argument: when I say “what signal is qualia,” I mean the minimum number of neurons we could kill to completely remove someone’s experience of qualia. We could sever the brain stem, but that would kill an excess of cells. We could kill the sensory cortex, but that would still kill more cells than necessary. We could sever the connection between the sensory cortex and the rest of the brain, and so on. As you minimize the number of cells, you move up the hierarchy, and eventually reach the prefrontal cortex. But once you reach the prefrontal cortex, the neurons that deliver qualia and the neurons that register it can’t really be separated.
Lastly, you said that assuming consciousness is some unique part of the universe is wrong because it cannot be demonstrably proven to exist. I can’t really argue against this, since it seems to relate to the difference in our experience of consciousness. To me, consciousness feels palpable, and everything else feels as thin as tissue paper.
Here’s another way of framing it: qualia, by definition, is not measurable by any instrument, but qualia must exist in some capacity in order for us to experience it. So, we must assume either that we cannot experience qualia, or that qualia exists in a way we do not fully understand yet. Since the former is generally rejected, the latter must be true.
You may argue that neurochemical signals are the physical manifestation of qualia, but making that assumption throws us into a trap. If qualia is neurochemical signals, which signals are they? By what definition can we precisely determine what is qualia and what is not? Are unconscious senses qualia? If we stimulated a random part of the brain, unrelated to the sensory cortex, would that create qualia? If the distribution of neurochemicals could be predicted, and the activations of neurons were deterministic as well, would calculating every stimulation in the brain be the same as consciousness?
In both arguments, consciousness is no clearer or blurrier, so which one is correct?
Okay, but what are SPARC and PA-RISC?
There are actually two standards here. The kibibyte was introduced later as a way to reduce confusion caused by the uninitiated thinking the JEDEC standard referred to powers of ten instead of powers of two. That’s why I’m saying that 64 kilobytes is equal to 2^16 bytes, because that’s what the original standard meant.
I still use MB and KB as multiples of 1024 instead of 1000, because I prefer not to have units switched around from under me. A 16-bit address space will always cover 64 KB, not 65.5.
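Quick arithmetic, if anyone wants to check the two conventions:

```python
print(2**16)         # 65536 bytes in a 16-bit address space
print(2**16 / 1024)  # 64.0   -> "64 KB" in the power-of-two (JEDEC) sense
print(2**16 / 1000)  # 65.536 -> ~65.5 kB in the power-of-ten (SI) sense
```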
This is definitely real and not propaganda because the Chinese economy is in a huge downturn.
How is the concept of democracy a scam?
Probably Ultima Ratio Regum; I found it on TIGSource. I have no idea how to actually play it, but it’s got big ambitions and is already pretty impressive. https://forums.tigsource.com/index.php?topic=22176.0
The big issue I have with brain chips is longevity. How long until the electrodes degrade? When will the chips fail? Once they fail, will it be fail-safe or fail-deadly? Also, what will be the power source? Will it use inductive power or battery power? Both are awful options. What if the chip overheats? The implementation is the real question here, but Neuralink refuses to give any answers because it’s proprietary.
Tolkien was primarily a linguist, so the languages he made were actually based on real languages. Quenya, his main Elvish language, is based on Finnish.