I am trying to make a custom indicator in NT7 write to a text file when all conditions are met; this text file will be monitored every second by another 3rd-party program.
This is how it works:
1. When all conditions are met, the custom indicator writes a new line to the text file, named "output.txt" for example.
2. Every second, the 3rd-party program opens "output.txt" and checks whether a new line has been written to the file. If yes, it reads that line into its own platform and then deletes the entire content of "output.txt".
Based on my understanding, there are two ways to write to a text file from a NinjaTrader indicator: one is StreamWriter, the other is the static methods on the System.IO.File class.
My question is:
What would happen if the 3rd-party program is opening "output.txt" at the same moment the NT indicator is trying to open and write to the same file? Will there be a conflict or an error message? Is there any way to avoid this?
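(For context: on Windows, if one process has the file open without allowing write sharing, a second process's attempt to open it typically fails immediately — in .NET this surfaces as an IOException — rather than blocking. A common mitigation is to retry the write with a short back-off instead of treating the failure as fatal. The NT7 side would be C#; the sketch below shows the same retry idea in Python, and the file path, retry count, and delay are all illustrative assumptions.)

```python
import os
import tempfile
import time

# hypothetical location of the shared signal file
path = os.path.join(tempfile.gettempdir(), "output.txt")

def append_line_with_retry(line, retries=5, delay=0.1):
    """Try to append one line; if the file is momentarily locked by
    the other process (an OSError/PermissionError on Windows), back
    off briefly and retry instead of crashing the indicator."""
    for _attempt in range(retries):
        try:
            with open(path, "a") as f:
                f.write(line + "\n")
            return True
        except OSError:
            time.sleep(delay)
    return False

ok = append_line_with_retry("signal: long 1.2345")
print(ok)
```

The same pattern in C# would wrap `File.AppendAllText` (or a `FileStream` opened with `FileShare.Read`) in a try/catch-and-retry loop.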
Any help will be greatly appreciated, thanks!
Hi,
It has been some time since this was asked, but anyway, digital content doesn't age (almost)... Such things pop up all the time for anyone developing any kind of system.
Something similar to what I describe below was proposed in the thread "Problem writing to file during strategy optimisation" on the ninjatrader.com support forum: [ninjatrader.com]/support/forum/forum/ninjatrader-7/strategy-development-aa/28975-problem-writing-to-file-during-strategy-optimisation/page2?t=28261&highlight=view360 (sorry for this encoding, I am not yet allowed to post links...)
You will find lots of code there, most of which is not relevant, but no clear overview.
In particular, a hint at a database-based solution is missing.
Memory-mapped files and named pipes may look like alternatives at first, but mutual locks can appear there too; they do not resolve the interop issue if collisions are possible.
TCP or UDP, on the other hand, can use one channel per direction, but then the other side needs to handle and integrate the received packets in a multi-threaded manner, which easily leads into tricky terrain.
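To make the socket alternative concrete, here is a minimal one-way UDP channel sketched in Python; both ends run in one process purely for illustration, and the localhost port number is an arbitrary assumption:

```python
import socket

# one-way "producer -> consumer" channel over UDP on localhost
ADDR = ("127.0.0.1", 9901)  # port chosen arbitrarily for this sketch

# consumer end: bind and wait for datagrams
consumer = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
consumer.bind(ADDR)
consumer.settimeout(1.0)

# producer end: fire-and-forget, no file involved at all
producer = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
producer.sendto(b"signal: long 1.2345", ADDR)

data, _ = consumer.recvfrom(4096)
print(data.decode())
producer.close()
consumer.close()
```

Note the trade-off the post describes: no file locking issues, but the consumer now needs a receive loop (usually on its own thread) and must cope with lost datagrams if UDP is used.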
In general, exchanging data between two different processes should not use any kind of real-time (and "real" means real!) synchronization, in order to avoid mutual blockage of
- different applications, which naturally run in different processes, and
- different threads within a single application.
As long as you are not controlling an industrial plant or doing high-frequency trading, you do not need real-time sync; 0.1 seconds is good enough to count as concurrent.
--------
So, how to do it?
The main point is to differentiate between producer and consumer.
The producer should always create a unique filename (GUID, timestamp+id, etc.) and write the information in a standardized way with metadata, like this:
[meta]
time= ....
id= ....
other stuff=
[data]
.....
Of course, you can use XML as well.
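The producer side described above might be sketched like this (the outbox directory, file-name scheme, and payload format are assumptions for the sketch; writing to a temporary name and renaming makes the file appear atomically, so the consumer never sees a half-written message):

```python
import os
import tempfile
import time
import uuid

# shared drop directory -- an assumption; both sides must agree on it
OUTBOX = os.path.join(tempfile.gettempdir(), "outbox")
os.makedirs(OUTBOX, exist_ok=True)

def produce(payload):
    """Write one message into its own uniquely named file."""
    msg_id = uuid.uuid4().hex
    # timestamp prefix keeps files sortable in production order
    name = "{:.6f}-{}.msg".format(time.time(), msg_id)
    meta = "[meta]\ntime={}\nid={}\n[data]\n{}\n".format(time.time(), msg_id, payload)
    tmp = os.path.join(OUTBOX, name + ".tmp")
    final = os.path.join(OUTBOX, name)
    with open(tmp, "w") as f:
        f.write(meta)
    os.replace(tmp, final)  # atomic rename on the same volume
    return final

path = produce("signal: long 1.2345")
print(path)
```

Because every message gets its own file, the producer never competes with the consumer for a shared "output.txt".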
The duty of selecting the proper data and cleaning up lies completely on the side of the consumer.
The consumer decides which data to actually use, merge, or delete.
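A matching consumer sketch, under the same assumptions about the drop directory and naming scheme (files still carrying a temporary suffix are simply not matched, since they are not complete yet):

```python
import glob
import os
import tempfile

# must match the producer's drop directory (an assumption)
OUTBOX = os.path.join(tempfile.gettempdir(), "outbox")
os.makedirs(OUTBOX, exist_ok=True)

def consume():
    """Pick up every finished message (oldest first, thanks to the
    timestamp prefix), read it, then delete it -- cleanup is the
    consumer's duty in this design."""
    messages = []
    for path in sorted(glob.glob(os.path.join(OUTBOX, "*.msg"))):
        with open(path) as f:
            messages.append(f.read())
        os.remove(path)
    return messages

# demo: drop one hand-written message in and pick it up
with open(os.path.join(OUTBOX, "0000000001-test.msg"), "w") as f:
    f.write("[meta]\nid=test\n[data]\nhello\n")
msgs = consume()
print(msgs)
```

Run periodically (every second, say), this replaces the "read and truncate output.txt" step from the original question without any shared-file contention.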
Note that with a multi-threaded producer (i.e., multi-threaded production), a lock object is not (entirely) appropriate: either it slows down your multi-threading, or produced data may even be lost. But that depends on the frequency of the write events.
Of course, you could use a database like SQLite for the whole thing. It handles concurrent access from multiple processes for you (writes are serialized internally by the database's locking), leaving the sorting out of precedence and storage as a task on the side of the database. The uniqueness of the entries is also guaranteed by the database.
From an architectural perspective, this gives you proper decoupling and a multi-tier design.
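A sketch of the database variant, using Python's built-in sqlite3 module (the database path and table layout are assumptions; each process would open its own connection to the same file, and SQLite's internal locking arbitrates the writes):

```python
import os
import sqlite3
import tempfile

# shared database file -- an assumption; both processes open the same path
DB = os.path.join(tempfile.gettempdir(), "signals.db")
if os.path.exists(DB):
    os.remove(DB)  # start fresh for the demo only

def init():
    with sqlite3.connect(DB) as con:
        con.execute("""CREATE TABLE IF NOT EXISTS signals (
                           id INTEGER PRIMARY KEY AUTOINCREMENT,
                           payload TEXT NOT NULL)""")

def produce(payload):
    # each INSERT is its own transaction; a busy timeout makes the
    # writer wait briefly instead of failing if the db is locked
    with sqlite3.connect(DB, timeout=5.0) as con:
        con.execute("INSERT INTO signals(payload) VALUES (?)", (payload,))

def consume():
    # read everything in insertion order, then delete what was read
    with sqlite3.connect(DB, timeout=5.0) as con:
        rows = con.execute("SELECT id, payload FROM signals ORDER BY id").fetchall()
        if rows:
            con.execute("DELETE FROM signals WHERE id <= ?", (rows[-1][0],))
        return [payload for _id, payload in rows]

init()
produce("signal: long 1.2345")
msgs = consume()
print(msgs)
```

The auto-incrementing primary key gives you the guaranteed uniqueness and ordering mentioned above for free.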
By the way, SQLite itself is just an embedded library, but small REST wrappers around it exist, so very little programming could be necessary to send data to it. It is probably the leanest solution for the task of interop between applications.