
Need Help: Keys keep resetting

Other idea:
Edit BuildDefaultControls.h
Then when it happens, you only need to Reset Controls to Default from inside the game menu.
That does not remove your other game preferences and should be quicker.
 
The only explanation I can think of is that whatever garbles the options file strikes fast for you.
But we never found out where that comes from.

It's an engine problem: sometimes the null terminator of a string gets overwritten by a non-null byte, and since string reads expect a null terminator, the read keeps going until it overreaches into unrelated memory/garbage, which corrupts the options. You can't solve it with anything in the scripts. That problem was corrected in my version.
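
To illustrate the failure mode in standard C terms -- a minimal sketch, not the engine's actual code, and the adjacent struct layout is only illustrative:

Code:
#include <stdio.h>
#include <string.h>

struct options_block
{
    char key_name[8];   // exactly 8 chars leaves no '\0' (legal in C)
    char neighbour[8];  // unrelated data that happens to sit next to it
};

int main(void)
{
    struct options_block b = { "KEYSKEYS", "GARBAGE" };
    // With no terminator in key_name, strlen() keeps reading into
    // neighbour until it finds a '\0' there. Strictly speaking this is
    // undefined behaviour -- which is exactly the bug being described.
    printf("%zu\n", strlen(b.key_name));   // typically prints 15, not 8
    return 0;
}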
 
Other idea:
Edit BuildDefaultControls.h
Then when it happens, you only need to Reset Controls to Default from inside the game menu.
That does not remove your other game preferences and should be quicker.
I think you mean "DefaultControls.h", since "BuildDefaultControls.h" doesn't exist anymore.

A couple of errors in the code I've picked up so far, Pieter:

init_pc.c:

1. There are nested comments in the commented-out segments. The interpreter may tolerate it in this instance, but this is generally bad practice in C:

/* This
is fine. */

// This is fine.

/* But //this
is not semantically right. */

2. On line 753 you have a special character in the comment ("\") that causes line 754 to be ignored entirely when the code is executed. The backslash in C is a special character that denotes that the linebreak should be ignored. This results in "objControlsState.key_codes.VK_OEM_5.img" never being set. (See the sketch right after this list.)

3. It's also worth noting that lines 699, 735, 739, 743, 749, 751, 755, and 757 contain non-unicode-compliant characters. You have a mixed file encoding on "init_pc.c". (Which may or may not cause an error with the interpreter.) Most notable is line 699 here, in which the non-compliant character is processed directly in the code.
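
As a minimal, standalone sketch (not the actual init_pc.c lines) of what a standards-conforming C compiler does with that trailing backslash:

Code:
#include <stdio.h>

int main(void)
{
    int img = 0;
    // this comment ends in a backslash \
    img = 1;   // spliced into the comment above, so it never executes
    printf("img = %d\n", img);   // prints "img = 0" under a conforming compiler
    return 0;
}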

controls.c:

1. Lines 183 and 333 contain a label that is not properly formatted and may cause problems with the code: "NK_Key_ ". I suggest renaming this to "NK_Key_space".
 
How do I say this delicately? Ignore that last Cerez post.

What you need to remember is that the "compiler" for the game, which parses and interprets the script code, does not conform to any standards and only understands what it was coded to do.

So in the case of the backslash character, it does not care; the backslash is not special, so it does not ignore the end of line -- which is also why you don't need to escape backslashes as \\.

It was compiled multi-byte, not unicode, and does not even support or understand unicode. In fact, the text files have to be ANSI; if they are UTF-8, as some were found to be when some GOF modders saved files, they don't work correctly. For cases where someone wants/needs UTF-8 in a certain text file, I had to build in a special flag placed at the top, so that if it is present, the engine converts.

And don't change that space: the parser for control keys very specifically looks for either an ASCII number or the exact character, and would not understand the complete word "space" after the underscore; that contradicts the convention it is specifically looking for.
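
A hedged sketch of the convention described -- the helper below is hypothetical, not the engine's actual parser, but it shows why "NK_Key_ " (literal space) works where "NK_Key_space" would not:

Code:
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// hypothetical helper: recover the key code from a "NK_Key_..." label
int key_from_label(const char *label)
{
    const char *tail = label + strlen("NK_Key_");
    if (isdigit((unsigned char)tail[0]))
        return atoi(tail);           // "NK_Key_32" -> 32 (ASCII number)
    return (unsigned char)tail[0];   // "NK_Key_ "  -> ' ' == 32
}

int main(void)
{
    printf("%d\n", key_from_label("NK_Key_ "));      // 32
    printf("%d\n", key_from_label("NK_Key_32"));     // 32
    printf("%d\n", key_from_label("NK_Key_space"));  // 115 ('s') -- wrong key!
    return 0;
}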
 
@ChezJfrey, this doesn't explain to me why the original code of the game is kept unicode-compliant while this file shows up as invalid. The term "multi-byte" is also very ambiguous -- it's not a standard to adhere to.

Mixed file encodings are known to cause problems with building and execution.

Just because the engine/interpreter tolerates deviation from proper C formatting doesn't mean we should deviate from the rules. This is bad practice in programming, and potentially causes unforeseen problems down the line. Certainly in the case of the backslash here, the deviation/breach is entirely avoidable: write "backslash" in the comment.

And don't change that space: the parser for control keys very specifically looks for either an ASCII number or the exact character, and would not understand the complete word "space" after the underscore; that contradicts the convention it is specifically looking for.

Really? :nerbz Then how do you explain this?

Code:
CI_CreateAndSetControls( "", "NK_KB_caps", CI_GetKeyCode("VK_CAPITAL"), 0, false);
CI_CreateAndSetControls( "", "NK_KB_left", CI_GetKeyCode("VK_LEFT"), 0, false);
CI_CreateAndSetControls( "", "NK_KB_right", CI_GetKeyCode("VK_RIGHT"), 0, false);
objControlsState.keys.key_8 = "NK_KB_back";
objControlsState.keys.key_27 = "NK_KB_esc";
objControlsState.keys.key_13 = "NK_KB_enter";
 
Let's try this one more time, @Felkvir. If it still doesn't work, you can take Pieter's and ChezJfrey's advice, modify "DefaultControls.h" to your liking (after backing it up) and rely on the 'reset controls to default' button to return your controls to normal in-game whenever they go berserk.

I've attached both files this time, cleaned up. Let's see if the bug still occurs. Make sure you start with a clean slate in regard to the "options" file.
 

Attachments

  • files.zip
    10.7 KB
It's an engine problem: sometimes the null terminator of a string gets overwritten by a non-null byte, and since string reads expect a null terminator, the read keeps going until it overreaches into unrelated memory/garbage, which corrupts the options. You can't solve it with anything in the scripts. That problem was corrected in my version.
Thanks very much for confirming, @ChezJfrey!
That explains a lot and makes perfect sense. :doff

I think you mean "DefaultControls.h", since "BuildDefaultControls.h" doesn't exist anymore.
Oops; yep!
I was writing from memory on my phone.

This is bad practice in programming, and potentially causes unforeseen problems down the line.
I'm convinced the PotC:NH mod code is absolutely LITTERED with bad programming practices.
It's been written by so many people with wildly varying levels of coding expertise and standards.
It's honestly a complete miracle that it still generally works as well as it does!
The game engine seems really bizarrely forgiving...

This is both a blessing and a curse.
It's completely possible for the most INSANE errors to occur that no sane coder could ever see coming.
I remember one time where a perfect line of code didn't work right.
But it worked fine if I put a completely meaningless statement right above it.
Apparently there was 'garbage' there, and by collecting it somewhere it couldn't do harm, the code could behave itself again afterwards.

Took me forever to figure that one out.
Managed it through pure trial and error.
Remember that I basically have no background in IT at all.
I still don't understand what the big deal is with different Unicode-y standards.
Why can't flat text just always be the same?

Don't bother trying to explain it to me.
Some things are pretty much beyond my understanding.
I'm still only really a user...
 
That explains a lot and makes perfect sense. :doff
Or does it? :nerbz

I still don't understand what the big deal is with different Unicode-y standards.
Why can't flat text just always be the same?
It's really quite simple: Text files are also binary files, and they need to define and contain characters somehow. The way they represent certain characters is called encoding. When a code parser reads a text file, it usually needs to know what format the text file was encoded in, so that it can interpret the characters/text in it correctly. Having the wrong format, or a mixed encoding, can lead to the parser misreading certain characters, and that can lead to an obstruction or error in the execution of the code.
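
For instance (a minimal standard C sketch, nothing engine-specific), the same character 'é' is stored differently depending on the encoding, and a parser that assumes single-byte text will misread the UTF-8 form:

Code:
#include <stdio.h>

int main(void)
{
    unsigned char latin1[] = { 0xE9 };         // 'é' in ISO-8859-1: one byte
    unsigned char utf8[]   = { 0xC3, 0xA9 };   // 'é' in UTF-8: two bytes
    printf("ISO-8859-1: %zu byte(s)\n", sizeof latin1);   // 1
    printf("UTF-8:      %zu byte(s)\n", sizeof utf8);     // 2
    return 0;
}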

Unicode is a multi-language, wide character range standard for encoding text files. "ANSI" refers to a range of old American/Windows text encoding standards, based on ASCII, from the DOS days; they don't support international characters the way unicode (UTF) does -- although efforts have been made to improve their international character support.

Edit:

Unicode (UTF-8) has been around since 1993, so I don't see why Akella would not have used it as a standard for their code/interpreter, especially since they are a Russian company, and were developing for an international market. It doesn't make sense to go with a limited, ANSI encoded character range.

I haven't encountered any issues running unicode-compliant code in CT, although CT is a newer game, with a newer version of the engine. But even the original, first Sea Dogs game, with the first public release of the engine, came out in 2000! It's not that old. :no
 
Here's a good, simple technical summary of what unicode is:

"Unicode is character standard to represent alphabets of all the languages of world. Normally ASCII character codes are used in other languages because these characters are represented by simply one byte -- that's why the range of these characters is only from 0-255. We can represent 256 character/symbols etc. in ASCII. But unicode is character coding based on two bytes -- that's why the range is 0-65535, meaning 65536. So we can represent all the language alphabets that exist in this world because the 65536 range is sufficient to accomodate the characters of most common languages around the world."

In fact, the text files have to be ANSI; if they are UTF-8, as some were found to be when some GOF modders saved files, they don't work correctly. For cases where someone wants/needs UTF-8 in a certain text file, I had to build in a special flag placed at the top, so that if it is present, the engine converts.
The special flag you are referring to here, Chez, is the unicode byte order mark (BOM) -- it makes for a properly formatted unicode encoded document, and it used to be required/implemented for every unicode file. In recent times, some standard-deviating text editors will omit the BOM and assume/predict a unicode encoding simply from the structure of the document -- since unicode has become the mainstream standard, and the BOM was intended to be optional by design.

So this doesn't mean that the engine doesn't support unicode -- on the contrary. The trick is to use a professional code editor that properly formats document encodings. The reason the engine had trouble with the unicode documents without the BOM saved in them is that it assumed they were ASCII (single-byte), and therefore misread their contents. The Storm engine needs the BOM in a unicode file in order to know that it is in fact unicode.

What this all means, effectively, is that the engine supports both (properly formatted) unicode (two-byte) and ANSI-formatted (single-byte) document encoding standards, but that it relies on the BOM to differentiate between them. (Which is proper behaviour, and makes a whole lot more sense.) Having a mixed encoding in this case (which, again, is a really bad practice) will likely result in the engine reading the document as a single-byte, ASCII encoded file.
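
To make that concrete, here is a hedged sketch (plain C, not the Storm engine's actual loader) of how a reader can use the BOM to decide between the two:

Code:
#include <stdio.h>

// returns 1 if the file starts with the UTF-8 BOM (bytes EF BB BF)
int has_utf8_bom(FILE *f)
{
    unsigned char bom[3];
    int is_utf8 = fread(bom, 1, 3, f) == 3 &&
                  bom[0] == 0xEF && bom[1] == 0xBB && bom[2] == 0xBF;
    if (!is_utf8)
        rewind(f);   // no BOM: start over and read as single-byte ANSI
    return is_utf8;
}

int main(int argc, char **argv)
{
    FILE *f = fopen(argc > 1 ? argv[1] : "options", "rb");
    if (!f)
        return 1;
    printf(has_utf8_bom(f) ? "UTF-8 (BOM found)\n" : "assuming ANSI\n");
    fclose(f);
    return 0;
}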

All this doesn't mean that we can do whatever we want -- on the contrary. Documents and code should still be properly formatted. The interpreter's flexibility/leeway is there so that accidental mistakes don't break the game -- not for us to abuse.

And if you still doubt my words, consider that Caribbean Tales shipped with half the game broken because of a simple, minor semantic error in one of the key files. Fixing that one small, hard-to-track-down error restored the intended functionality of half the game!

Messy code creates hard-to-track, phantom bugs. A good, experienced programmer codes clean.

(So please do not ignore what I said. :rolleyes:)
 
Let's try this one more time, @Felkvir. If it still doesn't work, you can take Pieter's and ChezJfrey's advice, modify "DefaultControls.h" to your liking (after backing it up) and rely on the 'reset controls to default' button to return your controls to normal in-game whenever they go berserk.

I've attached both files this time, cleaned up. Let's see if the bug still occurs. Make sure you start with a clean slate in regard to the "options" file.

The files didn't work. What is the reset controls to default button? It doesn't say in DefaultControls.h if it's supposed to be there.
 
What is the reset controls to default button? It doesn't say in DefaultControls.h if it's supposed to be there.
It's in the game. When you enter the controls menu, there should be a 'reset controls to default' button there. (Not sure what title it actually carries.)

As for the "DefaultControls.h" file, back it up for good measure, have a look inside (open it in a text/code editor), and it's pretty self-explanatory:

Copy and paste the keys around into the right action to match them with your custom controls setup, while carefully preserving the structure of things in the document as they are. (Pay attention to the double-quotes enclosing the keys.)
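
For example, assuming the file follows the same convention as the init_pc.c lines quoted earlier (this line is illustrative, not copied from DefaultControls.h -- check your own copy rather than pasting it):

Code:
CI_CreateAndSetControls( "", "NK_KB_left", CI_GetKeyCode("VK_LEFT"), 0, false);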

Save, and then you can test your changes by pressing that 'reset controls to default' button in the in-game menu, and trying your controls out in the game.

Once you have them reverting to your custom setup as default, whenever the game glitches out again with the controls, you can simply press that revert to defaults button to quickly fix it.
 
Ok, it seems to work by using that button in-game, but I encountered a new issue: when I'm in combat with some ships and pressing enter to select commands, it starts switching the view so I'm looking at other ships... wtf?
 
The special flag you are referring to here, Chez, is the unicode byte order mark (BOM)

No, I'm not. I guarantee you the engine does NOT read the BOM and it will not decipher UTF-8 (or any other UTF) properly; the only time it gets them "correct" is because the first 128 characters of those two sets happen to correspond. The engine ALWAYS assumes ASCII and uses the ASC function to get the code. Every time. I can see the actual code, so I know I am correct.

The "special flag" I put, was only for one instance, in the fonts.ini file, so that it will instead assume all interpretation when it further reads that particular font selection as UTF-8 and then all the files better be UTF-8. I did not take the time to do a more elegant and sophisticated solution due to time and only one case so far needed this so far.

The engine is compiled in VS with "Use Multi-byte Character Set" and not "Use Unicode Character Set", so all default actions do not conform to Unicode; there are, however, functions I can call specifically to convert, if possible: MultiByteToWideChar assuming UTF8.
 
Ok, it seems to work by using that button in-game, but I encountered a new issue: when I'm in combat with some ships and pressing enter to select commands, it starts switching the view so I'm looking at other ships... wtf?
From now on, you'll need to upload your custom edited "DefaultControls.h" file with a controls-related question like this for us to see what is happening and be able to offer you advice.

Have you installed any other mods to the game, apart from this one (NH)?
 
No, I'm not. I guarantee you the engine does NOT read the BOM and it will not decipher UTF-8 (or any other UTF) properly; the only time it gets them "correct" is because the first 128 characters of those two sets happen to correspond. The engine ALWAYS assumes ASCII and uses the ASC function to get the code. Every time. I can see the actual code, so I know I am correct.
If you're using Visual Studio for development, then you're in a bit of a pickle to begin with: VS does not support unicode properly:
About the "Character set" option in Visual Studio

It's not the game, it's your development environment to begin with.

Note that unicode is backwards-compatible with ASCII. Your "multi-byte" encoding setting (really confusing terminology used by Microsoft) supports both single-byte (ASCII) and multi-byte (UTF-8) characters, but with a catch: it doesn't support unicode properly -- that's why they didn't call it UTF-8.

See also:
Unicode and Multibyte Character Set (MBCS) Support

The Microsoft VS "unicode" encoding setting will actually use 16-bit unicode (UTF-16). This is a newer encoding type that I do not expect the Storm engine to support.

There is much confusion here because of Microsoft's (deliberate) weird naming and reluctance (or inability) to support UTF-8 properly.

This makes Visual Studio, or any other Microsoft product, a poor choice for developing for the game.

The "special flag" I put, was only for one instance, in the fonts.ini file, so that it will instead assume all interpretation when it further reads that particular font selection as UTF-8 and then all the files better be UTF-8. I did not take the time to do a more elegant and sophisticated solution due to time and only one case so far needed this so far.
Care to specify what that "special flag" actually means? I don't like riddles.

The engine is compiled in VS with "Use Multi-byte Character Set" and not "Use Unicode Character Set", so all default actions do not conform to Unicode; there are, however, functions I can call specifically to convert, if possible: MultiByteToWideChar assuming UTF8.
As pointed out above, this statement is incorrect.

MultiByteToWideChar will write/assume UTF-16. You do not want to use this. The Microsoft VS "multi-byte" setting will allow for both ASCII and UTF-8 (with limited support).
 
I use Visual Studio because that is the original project; it uses a bunch of Microsoft Windows API calls and Windows-only macros in the code. Since this was derived from the original, the CD versions of the game are built the same way, so I see a curious contradiction in that the "original" game supposedly supports this per a previous post, but mine won't because of Visual Studio?

The special flag I use at the top of the fonts.ini file, which indicates that the codes for the language/font being used should be interpreted as UTF-8, is just text inside the file:

UTF8toANSI = 1

That tells me that the code values from the strings being read should be converted to UTF-8 chars, instead of ASCII. Again, it is a temporary hack because this is not a high priority for me, as the mandate is to adhere to the constraints of the original game, but someone needed a way around it. So rather than code everything properly and decipher based on the BOM (which would be a chore, considering there is a bunch of separate .ini and text file reading scattered everywhere, and I would need to consolidate it into a proper, single handler), I punted and did it this way for now. Everybody else knows to just stick to what the game wants.

MultiByteToWideChar will convert from UTF-8 with the first param set to the code page constant Microsoft defined: MultiByteToWideChar(CP_UTF8...
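
For reference, a minimal sketch of that call (standard Win32 API; the buffer size is illustrative):

Code:
#include <windows.h>
#include <stdio.h>

int main(void)
{
    const char *utf8 = "caf\xC3\xA9";   // "café" as UTF-8 bytes
    wchar_t wide[16];
    // CP_UTF8 tells the API to interpret the input bytes as UTF-8;
    // the output is wide (UTF-16) characters.
    int n = MultiByteToWideChar(CP_UTF8, 0, utf8, -1, wide, 16);
    printf("%d wide chars (including the terminator)\n", n);   // 5
    return 0;
}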
 
I use Visual Studio because that is the original project; it uses a bunch of Microsoft Windows API calls and Windows-only macros in the code. Since this was derived from the original, the CD versions of the game are built the same way, so I see a curious contradiction in that the "original" game supposedly supports this per a previous post, but mine won't because of Visual Studio?
While it's technically possible that the POTC engine relies solely on ASCII encoding in its text files and code, this doesn't make much sense at all in terms of development for a multi-language product. That would be about as bad a practice/implementation as what Visual Studio is doing here with standard unicode, or worse.

Furthermore, I'll do more digging into this, but CT seems to support unicode encoded documents without a problem (as long as they have the BOM) -- so if this is indeed true, it may be POTC-and-earlier-exclusive.

The special flag I use at the top of the fonts.ini file, which indicates that the codes for the language/font being used should be interpreted as UTF-8, is just text inside the file:

UTF8toANSI = 1

That tells me that the code values from the strings being read should be converted to UTF-8 chars, instead of ASCII. Again, it is a temporary hack because this is not a high priority for me, as the mandate is to adhere to the constraints of the original game, but someone needed a way around it. So rather than code everything properly and decipher based on the BOM (which would be a chore, considering there is a bunch of separate .ini and text file reading scattered everywhere, and I would need to consolidate it into a proper, single handler), I punted and did it this way for now. Everybody else knows to just stick to what the game wants.
I see. Thanks for clarifying.

MultiByteToWideChar will convert from UTF-8 with the first param set to the code page constant Microsoft defined: MultiByteToWideChar(CP_UTF8...
While this is a working solution, note that even Microsoft recommends avoiding this, and using a clean/proper unicode implementation instead, throughout all the application's files/documents if possible:
MultiByteToWideChar function (stringapiset.h) - Win32 apps

"The ANSI code pages can be different on different computers, or can be changed for a single computer, leading to data corruption. For the most consistent results, applications should use Unicode, such as UTF-8 or UTF-16, instead of a specific code page, unless legacy standards or data formats prevent the use of Unicode. If using Unicode is not possible, applications should tag the data stream with the appropriate encoding name when protocols allow it. HTML and XML files allow tagging, but text files do not."
 
Furthermore, I'll do more digging into this, but CT seems to support unicode encoded documents without a problem (as long as they have the BOM) -- so if this is indeed true, it may be POTC-and-earlier-exclusive.
My bad. :no Further examination of the files shows that all the files in the game were originally encoded in ISO-8859-1, even the Russian language ones -- and I was working with ISO-8859-1 while under the impression I was using standard unicode.

New Horizons' files, however, show a mixture of UTF-8 and ISO-8859-1 encoded documents -- likely due to the fact that most people consider UTF-8 to be the standard character encoding these days (as I did), and saved them as such (converting the files from ISO-8859-1 to UTF-8).

If the engine indeed does not support UTF-8 fully (only the ASCII-compliant portion of it), this can lead to unforeseen bugs/errors. I'll try running a test to see if this is the case. I still doubt it, and it would be a rather odd development decision from Akella.

But if this is the case, care should be taken that people always save the files as ISO-8859-1 encoded when editing Sea Dogs (including NH) code in the future. UTF-8 encoded (or any other format) files should not make it into the build in that case.
 
From now on, you'll need to upload your custom edited "DefaultControls.h" file with a controls-related question like this for us to see what is happening and be able to offer you advice.

Have you installed any other mods to the game, apart from this one (NH)?

No, I have not. I have not changed the default controls either, as I didn't need any custom setup.
 
No, I have not. I have not changed the default controls either, as I didn't need any custom setup.
If you have not yet done so, make sure you restore the original files from the "backup" folder to replace the ones I gave you during testing.

Also, I'm curious, what version of Windows are you running and in what language?

If this 'enter' key problem is indeed new to your game, try ditching the "options" file again. It may have gotten corrupted.
 