Dataset columns: date — string (length 10); nb_tokens — int64 (60 to 629k); text_size — int64 (234 to 1.02M); content — string (234 to 1.02M characters).
2018/03/16 · 904 tokens · 3,444 characters
<issue_start>username_0: I'm relatively new to Java 8 and I have a scenario where I need to retrieve all the keys from the Map which matched with the objects. I wanted to know if there is a way to get all keys without iterating over the list again.

```
Person.java

private String firstName;
private String lastName;
//setters and getters & constructor
```

```
// MAIN class
String inputCriteriaFirstName = "john";
Map<String, Person> inputMap = new HashMap<>();
Collection<Person> personCollection = inputMap.values();
List<Person> personList = new ArrayList<>(personCollection);
List<Person> personOutputList = personList.stream()
        .filter(p -> p.getFirstName().contains(inputCriteriaFirstName))
        .collect(Collectors.toList());

//IS There a BETTER way to DO Below ??
Set<String> keys = new HashSet<>();
for (Person person : personOutputList) {
    keys.addAll(inputMap.entrySet().stream()
            .filter(entry -> Objects.equals(entry.getValue(), person))
            .map(Map.Entry::getKey)
            .collect(Collectors.toSet()));
}
```

<issue_comment>username_1:

```
inputMap.entrySet()
        .stream()
        .filter(entry -> personOutputList.contains(entry.getValue()))
        .map(Entry::getKey)
        .collect(Collectors.toCollection(HashSet::new))
```

Upvotes: 5 [selected_answer]<issue_comment>username_2: Instead of iterating over all the entries of the Map for each `Person`, I suggest iterating over the Map once:

```
Set<String> keys = inputMap.entrySet()
        .stream()
        .filter(e -> personOutputList.contains(e.getValue()))
        .map(Map.Entry::getKey)
        .collect(Collectors.toCollection(HashSet::new));
```

This would still result in quadratic running time (since `List.contains()` has linear running time). You can improve that to overall linear running time if you create a `HashSet` containing the elements of `personOutputList`, since `contains` for a `HashSet` takes constant time. You can achieve that by changing

```
List<Person> personOutputList = personList.stream()
        .filter(p -> p.getFirstName().contains(inputCriteriaFirstName))
        .collect(Collectors.toList());
```

to

```
Set<Person> personOutputSet = personList.stream()
        .filter(p -> p.getFirstName().contains(inputCriteriaFirstName))
        .collect(Collectors.toCollection(HashSet::new));
```

Upvotes: 3 <issue_comment>username_3: You can also use the **forEach** API provided in Java 8 with lambdas. Below is the code for your main method:

```
public static void main() {
    String inputCriteriaFirstName = "john";
    Map<String, Person> inputMap = new HashMap<>();
    Set<String> keys = new HashSet<>();
    inputMap.forEach((key, value) -> {
        if (value.getFirstName().contains(inputCriteriaFirstName)) {
            keys.add(key);
        }
    });
}
```

Upvotes: 3 <issue_comment>username_4: So, you want a `personOutputList` with all the selected persons, and a `keys` set with the keys for those selected persons? The best (for performance) option is to not discard the keys during the search, then split the result into a separate person list and key set. Like this:

```
String inputCriteriaFirstName = "john";
Map<String, Person> inputMap = new HashMap<>();

Map<String, Person> tempMap = inputMap.entrySet()
        .stream()
        .filter(e -> e.getValue().getFirstName().contains(inputCriteriaFirstName))
        .collect(Collectors.toMap(Entry::getKey, Entry::getValue));

List<Person> personOutputList = new ArrayList<>(tempMap.values());
Set<String> keys = new HashSet<>(tempMap.keySet());
```

The `keys` set is explicitly made an updatable copy. If you don't need that, drop the copying of the key values:

```
Set<String> keys = tempMap.keySet();
```

Upvotes: 1
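Note: the single-pass idea from username_2 and username_3 translates directly to other languages. A minimal Python sketch of the same filter — the `Person` class and map contents here are made up for illustration and are not part of the original thread:

```python
from dataclasses import dataclass

@dataclass
class Person:
    first_name: str
    last_name: str

def matching_keys(input_map, criteria):
    # One pass over the map: keep each key whose value matches the criteria.
    return {key for key, person in input_map.items()
            if criteria in person.first_name}

people = {"a1": Person("john", "doe"), "a2": Person("jane", "roe")}
print(matching_keys(people, "john"))  # {'a1'}
```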
2018/03/16 · 949 tokens · 3,316 characters
<issue_start>username_0: What is the size (in bytes) of the following structures on a 32-bit machine installed with the Linux operating system?

sizeof(int) = 4 bytes, sizeof(short) = 2 bytes, sizeof(char) = 1 byte

**1**

```
struct foo {
    int d1;
    char c1;
    int d2;
};
```

**2**

```
struct foo {
    int d1;
    char c1;
    int d2;
    char c2;
    short s;
};
```

**3**

```
struct foo {
    int d1;
    int d2;
    char c1;
    char c2;
    short s;
};
```

**4**

```
struct foo {
    char c1;
    int d1;
    short s;
    int d2;
    char c2;
};
```

ANSWERS: 12, 16, 12, 20

Could someone please explain how to find these answers? I am very confused and don't understand the concept here. Can someone explain the padding requirement for each? That's my confusion. I don't understand what's required.
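Note: on a typical 32-bit x86 Linux ABI, each member is aligned to its own size (int to 4, short to 2, char to 1), and the struct's total size is rounded up to a multiple of its largest member alignment. Case 1: `d1` occupies bytes 0-3, `c1` byte 4, then 3 padding bytes so `d2` can start at offset 8 — total 12. Case 2 appends `c2` at 12, one pad byte, and `s` at 14-15 — total 16. Case 3 packs the small members together (offsets 0, 4, 8, 9, 10) — total 12. Case 4 pays the most: `c1` at 0 plus 3 pad, `d1` at 4-7, `s` at 8-9 plus 2 pad, `d2` at 12-15, `c2` at 16 plus 3 tail pad — total 20. You can verify this layout with Python's `ctypes`, which applies the platform's native C alignment rules (a sketch, assuming an x86-like ABI where these type sizes hold):

```python
import ctypes

# Same field orders as cases 1-4 above; ctypes inserts native padding.
class Foo1(ctypes.Structure):
    _fields_ = [("d1", ctypes.c_int), ("c1", ctypes.c_char), ("d2", ctypes.c_int)]

class Foo2(ctypes.Structure):
    _fields_ = [("d1", ctypes.c_int), ("c1", ctypes.c_char),
                ("d2", ctypes.c_int), ("c2", ctypes.c_char), ("s", ctypes.c_short)]

class Foo3(ctypes.Structure):
    _fields_ = [("d1", ctypes.c_int), ("d2", ctypes.c_int),
                ("c1", ctypes.c_char), ("c2", ctypes.c_char), ("s", ctypes.c_short)]

class Foo4(ctypes.Structure):
    _fields_ = [("c1", ctypes.c_char), ("d1", ctypes.c_int),
                ("s", ctypes.c_short), ("d2", ctypes.c_int), ("c2", ctypes.c_char)]

for cls in (Foo1, Foo2, Foo3, Foo4):
    print(cls.__name__, ctypes.sizeof(cls))  # expect 12, 16, 12, 20
```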
2018/03/16 · 447 tokens · 1,671 characters
<issue_start>username_0: I don't know if it is possible: I want to know the line numbers where someWord is found in someFile.

```
try {
    CharsetDecoder dec = StandardCharsets.UTF_8.newDecoder()
            .onMalformedInput(CodingErrorAction.IGNORE);
    try (Reader r = Channels.newReader(FileChannel.open("path"), dec, -1);
         BufferedReader br = new BufferedReader(r)) {
        br.lines().filter(line -> line.contains("SomeWord"))
          .forEach(line -> System.out.println("location:" + line.????));
          //Location where line has the "SomeWord"
    }
} catch (IOException | java.io.UncheckedIOException ex) {
    Logger.getLogger(RecursiveFolderAndFiles.class.getName())
          .log(Level.SEVERE, null, ex);
}
```

How can I do this?<issue_comment>username_1: Instead of piping further operations on the `br.lines()`, you can collect to a list and then utilize `IntStream.range` like so:

```
...
List<String> resultSet = br.lines().collect(Collectors.toList());
IntStream.range(0, resultSet.size())
         .filter(index -> resultSet.get(index).contains("SomeWord"))
         .forEach(index -> System.out.println("location:" + index));
...
```

Now you should have access to both the index and the lines.

Upvotes: 1 <issue_comment>username_2: You don't necessarily need streams or Java 8 for this... there is `LineNumberReader`, which extends `BufferedReader`, so you could do:

```
try (LineNumberReader lnr = new LineNumberReader(r)) {
    String line;
    while ((line = lnr.readLine()) != null) {
        if (line.contains("SomeWord")) {
            System.out.println("location:" + lnr.getLineNumber());
        }
    }
}
```

Upvotes: 1 [selected_answer]
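Note: for comparison outside Java, the same task is a few lines in Python. A minimal sketch, not from the thread — the file path and search word are placeholders:

```python
def find_word_lines(path, word):
    # enumerate(..., start=1) yields (line_number, line) pairs.
    with open(path, encoding="utf-8", errors="ignore") as f:
        for number, line in enumerate(f, start=1):
            if word in line:
                print(f"location: {number}")

find_word_lines("someFile.txt", "SomeWord")
```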
2018/03/16 · 649 tokens · 2,199 characters
<issue_start>username_0: I am making a request to the predicthq API to get a list of events:

```js
app.get("/test", (req, res) => {
  // [0] IS THE LONGITUDE
  request.get("https://api.predicthq.com/v1/events/?q=Minnesota&limit=10", { headers }, (err, data) => {
    var results = JSON.parse(data.body).results;
    var serverRes = [];

    for (var i in results) {
      getGeo(results[i].location[1], results[i].location[0]).then(v => {
        serverRes[i] = v;
      })
    } //end of for loop

    res.send(serverRes)
  }); // end of request call
}); //end of GET CALL

//ASYNC FUNCTIOON
function getGeo(lat, long) {
  return new Promise(resolve => {
    geocoder.reverse({ lat: lat, lon: long }, function(err, res) {
      resolve(res)
    });
  });
}
```

```html
$(document).ready(function() {
  $("button").on("click", () => {
    console.log("Button Clicked");
    $.get("/test", function(data, status){
      console.log("data", data)
    });
  });//end of button clicked
});
```

When I get the response I want to convert the lat, long of the response to an address and put it in an array variable. When I get the response from the server it gives me a list of empty arrays. How do I make the for loop wait until `geocoder.reverse` gets the data, then move on to the other lat, long?<issue_comment>username_1: Use that `for` loop to put those promises into an array, then use `Promise.all` to wait for them to resolve.

```
var results = JSON.parse(data.body).results;
var geoPromises = [];

for (var i in results) {
  var promise = getGeo(results[i].location[1], results[i].location[0]);
  geoPromises.push(promise);
}

Promise.all(geoPromises).then(vs => {
  res.send(vs);
});
```

Upvotes: 3 [selected_answer]<issue_comment>username_2: Have you tried [async await](https://javascript.info/async-await)? Looking at it, I think this could work for you:

```
$(document).ready(function() {
  $("button").on("click", async () => {
    console.log("Button Clicked");
    await $.get("/test", function(data, status){
      console.log("data", data)
    });
  });//end of button clicked
});
```

Upvotes: 0
2018/03/16 · 3,568 tokens · 14,083 characters
<issue_start>username_0: Some `GameObjects` in my scene implement the interface `ISaveable`. In my script, I want to find all these interfaces and store them. Later on I can loop through them and call their implemented method `SaveData()`. My current workaround to find these interfaces:

```
List<ISaveable> saveables = new List<ISaveable>();
MonoBehaviour[] sceneObjects = FindObjectsOfType<MonoBehaviour>();

for (int i = 0; i < sceneObjects.Length; i++)
{
    MonoBehaviour currentObj = sceneObjects[i];
    ISaveable currentComponent = currentObj.GetComponent<ISaveable>();

    if (currentComponent != null)
    {
        saveables.Add(currentComponent);
    }
}
```

The code works fine, but is there a better way? I don't want to search for each MonoBehaviour in the scene and then try to get its interface component.<issue_comment>username_1: How about having global state, perhaps a list of `ISaveable`, and have each saveable object add a reference to it during `Start()`? Then you can simply iterate through the list, invoking `SaveData()` on each `ISaveable` reference.

Upvotes: 1 <issue_comment>username_2: You could have a manager class holding a collection/list of `ISaveable`. You can then make that manager class a singleton by setting the singleton value in its Awake method.

```
class SaveManager {
    public static SaveManager Instance;

    void Awake() {
        if (Instance == null)
        {
            Instance = this;
        }
    }

    List<ISaveable> saveAbles;

    public void AddSaveAble(ISaveable saveAble) {
        //add it to the list.
    }
}
```

Then in the Start method of a class implementing the `ISaveable` interface you can use the singleton to add it to the total list.

```
class Example : MonoBehaviour, ISaveable
{
    void Start() {
        SaveManager.Instance.AddSaveAble(this);
    }
}
```

That way, each saveable adds itself to the manager via the manager's method when it is created. Note that it is important that the singleton is set in Awake so it can be used in Start, as Awake comes first in the lifecycle.

Upvotes: 2 <issue_comment>username_3: Note that this solution is highly inefficient, but you could use LINQ to filter out the components that implement your interface. In this case `MyInterface` is the interface that we want to query for.

```cs
MyInterface[] myInterfaces = GameObject.FindObjectsOfType<MonoBehaviour>()
    .OfType<MyInterface>()
    .ToArray();
```

Small note to clarify "inefficient": with Unity optimizing their object-querying methods, the call is not even that bad. The issue is that LINQ allocates a lot of GC, which could introduce frame drops, which is why you should *never* run these "heavy" operations on a per-frame basis. Running the query upon startup or when doing an obviously intensive operation (saving/loading) is totally fine.

Upvotes: 0 <issue_comment>username_4: In my case I was using this implementation:

```
MonoBehaviour[] saveableScripts = (MonoBehaviour[])FindObjectsOfType(typeof(MonoBehaviour));

foreach (var enemy in saveableScripts)
{
    if (enemy is ISaveable == false)
        continue;

    // your code ...
    saveables.Add(currentComponent);
}
```

Upvotes: -1 <issue_comment>username_5: ### Simplified filter using Linq

You can use [Linq `OfType`](https://learn.microsoft.com/dotnet/api/system.linq.enumerable.oftype)

```
ISaveable[] saveables = FindObjectsOfType<MonoBehaviour>(true).OfType<ISaveable>().ToArray();
```

Of course it still requires finding all objects first. Note though that it is subject to the general limitations of `FindObjectsOfType` regarding inactive GameObjects or disabled behaviours. (UPDATE: In newer Unity versions the optional `bool` parameter enables including inactive/disabled components as well.)
Also see [`Object.FindObjectsByType`](https://docs.unity3d.com/ScriptReference/Object.FindObjectsByType.html) nowadays (which is still limited to `UnityEngine.Object`)

---

### Iterate through the hierarchy

You could extend it to go through all root-level objects of the scene. This works since `GetComponentsInChildren` indeed **does** work with interfaces!

```
var saveables = new List<ISaveable>();
var rootObjs = SceneManager.GetActiveScene().GetRootGameObjects();

foreach (var root in rootObjs)
{
    // Pass in "true" to include inactive and disabled children
    saveables.AddRange(root.GetComponentsInChildren<ISaveable>(true));
}
```

Whether it's more efficient — yes, no, maybe, I don't know — but it includes inactive and disabled objects as well. And yes, one could extend that to iterate through multiple loaded scenes using

```
var saveables = new List<ISaveable>();

for (var i = 0; i < SceneManager.sceneCount; i++)
{
    var rootObjs = SceneManager.GetSceneAt(i).GetRootGameObjects();
    foreach (var root in rootObjs)
    {
        saveables.AddRange(root.GetComponentsInChildren<ISaveable>(true));
    }
}
```

---

### Not use an interface in the first place

This alternative is a bit similar to [this answer](https://stackoverflow.com/a/49330090/7111561) but it has a huge flaw: you would need a specific implementation for the interface in order to make it work, which invalidates the whole idea of an interface. So the big question, also from the comments there, is: **Why have an interface at all?** If it is only ever going to be used for `MonoBehaviour`, you should rather have an `abstract class` like

```
public abstract class SaveableBehaviour : MonoBehaviour
{
    // Inheritors have to implement this (just like with an interface)
    public abstract void SaveData();
}
```

This already solves the entire issue with using `FindObjectsOfType`, since now you could simply use

```
SaveableBehaviour[] saveables = FindObjectsOfType<SaveableBehaviour>();
```

but you can still go further: for even easier and more efficient access you can make them register themselves, completely without the need of a manager or singleton pattern! Why should a type not simply handle its instances itself?

```
public abstract class SaveableBehaviour : MonoBehaviour
{
    // Inheritors have to implement this (just like with an interface)
    public abstract void SaveData();

    private static readonly HashSet<SaveableBehaviour> instances = new HashSet<SaveableBehaviour>();

    // public read-only access to the instances by only providing a clone
    // of the HashSet so nobody can remove items from the outside
    public static HashSet<SaveableBehaviour> Instances => new HashSet<SaveableBehaviour>(instances);

    protected virtual void Awake()
    {
        // simply register yourself to the existing instances
        instances.Add(this);
    }

    protected virtual void OnDestroy()
    {
        // don't forget to also remove yourself at the end of your lifetime
        instances.Remove(this);
    }
}
```

so you can then simply inherit

```
public class Example : SaveableBehaviour
{
    public override void SaveData()
    {
        // ...
    }

    protected override void Awake()
    {
        base.Awake(); // Make sure to always keep that line

        // do additional stuff
    }
}
```

and you could access all instances of that type via

```
HashSet<SaveableBehaviour> saveables = SaveableBehaviour.Instances;
foreach (var saveable in saveables)
{
    saveable.SaveData();
}
```

Upvotes: 4 <issue_comment>username_6: One way is to use a unity container (though a slightly older way). Register all the dependencies to the container through configuration. All the instances of `ISavable` can be registered in the unity container (configured in web.config) and then resolved in code.

`ISavable` definition:

```
public interface ISavable
{
    void SaveData();
}
```

`ISavable` implementations:

```
public class Example1 : ISavable
{
    public void SaveData()
    {
    }
}

public class Example2 : ISavable
{
    public void SaveData()
    {
    }
}
```

Upvotes: -1 <issue_comment>username_7: I agree with the idea of using an `ISaveable` manager for this use case, as username_2 and others suggest. But to answer your question as asked, you could traverse the entire scene hierarchy, or multiple scenes' hierarchies, as follows:

```
public class Foo : MonoBehaviour
{
    List<ISaveable> saveables;

    void RefreshSaveables()
    {
        saveables = new List<ISaveable>();
        for (var i = 0; i < SceneManager.sceneCount; ++i)
        {
            var roots = SceneManager.GetSceneAt(i).GetRootGameObjects();
            for (var j = 0; j < roots.Length; ++j)
            {
                saveables.AddRange(roots[j].GetComponentsInChildren<ISaveable>());
            }
        }
    }
}
```

The advantage of this over your original code is that you don't need to fetch every MonoBehaviour in the scene in order to later filter it. However, I still think it is preferable for each ISaveable to register itself on `Start` and let the manager call their `Save` methods.

Upvotes: -1 <issue_comment>username_8: Here is an improvement of your method to get all the MonoBehaviours with your interface currently in your scenes.

```
void Start()
{
    var myInterface = typeof(MyInterface);
    var monoBehavior = typeof(MonoBehaviour);
    var types = Assembly.GetExecutingAssembly()
        .GetTypes()
        .Where(p => myInterface.IsAssignableFrom(p) && monoBehavior.IsAssignableFrom(p));

    var myInstance = new List<MyInterface>();
    foreach (var type in types)
    {
        myInstance.AddRange(FindObjectsOfType(type).Select(x => x as MyInterface));
    }
}
```

Here you don't go through all the GameObjects in the scenes, but you make more FindObjectsOfType calls, plus you have to cast. So it depends on your case, but if you put the reflection part in cache (during a loading moment) it can save you from implementing the manager pattern, and you can keep your interface architecture. Otherwise you can take a look at the observer pattern; it is a different way to manage calls over different class types. <https://sourcemaking.com/design_patterns/observer>

Upvotes: 0 <issue_comment>username_9: The typical real-world solution:
================================

```
using System.Linq;

public interface Automateme
{
    public void Automate();
}
```

then ...

```
void _do()
{
    Automateme[] aa = GameObject
        .FindObjectsOfType<MonoBehaviour>()
        .OfType<Automateme>()
        .Where(a => ((MonoBehaviour)a).enabled)
        .ToArray();
    if (aa.Length == 0)
    {
        Debug.Log($"Automator .. none");
        return;
    }
    Debug.Log($"Automator .. " + aa.Length);
    foreach (Automateme a in aa)
    {
        Debug.Log($"Automator {a.GetType()} {((MonoBehaviour)a).gameObject.name}");
    }
    // do something to one of them randomly, for example! ->
    aa.AnyOne().Automate();
}
```

One point: it is all but inevitable that you will need the object also as a MonoBehaviour, so it's sensible (and no less/more inefficient) to use the `FindObjectsOfType<MonoBehaviour>()` idea first mentioned by @username_5.

Some points

* Of course, obviously, in some situations **you wouldn't at all do what is asked in this question** — you'd trivially "keep a list" of the items under consideration. (And that would vary greatly depending on the situation: do you only want active ones, do you want to include items on their first frame of bringup, etc.)
The entire point of the question is for the (many, typical) situations where you want to get them on the fly and caching is not relevant.
* Unity development, confusingly, **has a massive, just ridiculously big, domain range**. Many types of projects have perhaps a dozen game objects in the whole scene; other types of projects can have 10,000 game objects in the scene. Games with 2D physics are incredibly different from games with 3D physics, rendering approaches are incredibly different from shader approaches, and so on. Naturally, with any "style" question like this, if you are coming from a certain "domain" in your head, the answer can be quite different.
* Regarding FindObjectsOfType and similar calls, it's worth remembering that **Unity efficiency has incredibly moved on from the early days**. It used to be that the "first thing you'd tell new Unity programmers" was "Don't do something every frame that searches the scene!!! OMG! You'll slow down to 1fps!" *Of course* that is still true in certain, perhaps many, domains. BUT it is (for a decade now) often just *completely wrong*. Unity's caching and handling of scene objects is now blisteringly fast (you can't write faster), and — as mentioned in the previous point — you're often just talking about a handful of items anyway. {A good example of the change in efficiency of the entire Unity paradigm is "pooling". (Any game programmers not as old as me can look up what that was :) ) In the early days of Unity you'd Pool! everything in a panic — My God, I may have *three* tanks at once, get a pooling library! Indeed I am so old I wrote one of the first popular pooling articles that everyone used for a while (long since deleted). Nowadays Unity's handling of instantiating prefabs or offscreen models is so fast and clever that you could more or less say it offers pooling built-in, and nowadays there's no need at all to pool things like everyday bullets from machine guns, etc. Anyway, the point of this example is just that the (very old) "standard advice" about not searching the scene has to be understood more completely these days.}

Swinging back to the original question:

> I don't want to search for each Monobehaviour in the scene and then try to get its interface component.

The only way in a game engine to search each object in a scene is to search each object in the scene. So there's no way around that. Of course, one can "keep a list", BUT that rather misses two points: (A) in some/many situations that's just not viable/sensible in a game architecture, and (B) Unity *is already* incredibly good at keeping a list of game objects/components — it's the whole raison d'être of a frame engine. (Note that if it's a physics game, the whole PhysX system, for goodness' sake, is doing an astounding, cutting-edge job of "finding things every frame", using mathematically freaky spatial hashing and more, dealing with orders of complexity that grow with the square or worse of every physics poly in the scene! Compared to that, it is misguided to worry about pulling out 3 items from a list of 50 (which are flawlessly hashed and managed for efficiency by the frame engine anyway).)

Upvotes: 2 <issue_comment>username_10: You can improve the code by replacing

```cs
ISaveable currentComponent = currentObj.GetComponent<ISaveable>();
```

with

```cs
ISaveable currentComponent = currentObj as ISaveable;
```

Upvotes: -1
2018/03/16 · 1,776 tokens · 6,999 characters
<issue_start>username_0: I'd like to alert on the lack of a heartbeat (or 0 bytes received) from any one of a large number of Google IoT Core devices. I can't seem to do this in Stackdriver. It instead appears to let me alert on the entire device registry, which does not give me what I'm looking for (how would I know that a particular device is disconnected?). So how does one go about doing this?<issue_comment>username_1: I have no idea why this question was downvoted as 'too broad'. The truth is Google IoT doesn't have per-device alerting, but instead offers only alerting on an entire device registry. If this is not true, please reply to this post. The page that clearly states this [is here](https://cloud.google.com/iot/docs/how-tos/monitoring):

> Cloud IoT Core exports usage metrics that can be monitored programmatically or accessed via Stackdriver Monitoring. These metrics are aggregated at the device registry level. You can use Stackdriver to create dashboards or set up alerts.

The importance of having per-device alerting is built into the promise assumed in [this statement](https://cloud.google.com/solutions/architecture/real-time-stream-processing-iot#stackdriver_monitoring_and_stackdriver_logging):

> Operational information about the health and functioning of devices is important to ensure that your data-gathering fabric is healthy and performing well. Devices might be located in harsh environments or in hard-to-access locations. Monitoring operational intelligence for your IoT devices is key to preserving the business-relevant data stream.

So it's not easy today to get an alert if one among many globally dispersed devices loses connectivity. One needs to build that, and depending on what one is trying to do, it would entail different solutions. In my case I wanted to alert if the last heartbeat time or last event state publish was older than 5 minutes. For this I need to run a looping function that scans the device registry and performs this operation regularly.
The usage of this API is outlined in this other SO post: [Google iot core connection status](https://stackoverflow.com/questions/48578216/google-iot-core-connection-status)

Upvotes: 4 [selected_answer]<issue_comment>username_2: For reference, here's a Firebase function I just wrote to check a device's online status. It probably needs some tweaks and further testing, but it should give anybody else something to start with:

```
// Example code to call this function
// const checkDeviceOnline = functions.httpsCallable('checkDeviceOnline');
// Include 'current' key for 'current' online status to force update on db with delta
// const isOnline = await checkDeviceOnline({ deviceID: 'XXXX', current: true })
export const checkDeviceOnline = functions.https.onCall(async (data, context) => {
    if (!context.auth) {
        throw new functions.https.HttpsError('failed-precondition', 'You must be logged in to call this function!');
    }
    // deviceID is passed in deviceID object key
    const deviceID = data.deviceID

    const dbUpdate = (isOnline) => {
        if (('wasOnline' in data) && data.wasOnline !== isOnline) {
            db.collection("devices").doc(deviceID).update({ online: isOnline })
        }
        return isOnline
    }

    const deviceLastSeen = () => {
        // We only want to use these to determine "latest seen timestamp"
        const stamps = ["lastHeartbeatTime", "lastEventTime", "lastStateTime", "lastConfigAckTime", "deviceAckTime"]
        return stamps.map(key => moment(data[key], "YYYY-MM-DDTHH:mm:ssZ").unix())
                     .filter(epoch => !isNaN(epoch) && epoch > 0)
                     .sort().reverse().shift()
    }

    await dm.setAuth()

    const iotDevice: any = await dm.getDevice(deviceID)

    if (!iotDevice) {
        throw new functions.https.HttpsError('failed-get-device', 'Failed to get device!');
    }

    console.log('iotDevice', iotDevice)

    // If there is no error status and there is last heartbeat time, assume device is online
    if (!iotDevice.lastErrorStatus && iotDevice.lastHeartbeatTime) {
        return dbUpdate(true)
    }

    // Add iotDevice.config.deviceAckTime to root of object
    // For some reason in all my tests, I NEVER receive anything on lastConfigAckTime, so this is my workaround
    if (iotDevice.config && iotDevice.config.deviceAckTime) iotDevice.deviceAckTime = iotDevice.config.deviceAckTime

    // If there is a last error status, let's make sure it's not a stale (old) one
    const lastSeenEpoch = deviceLastSeen()
    const errorEpoch = iotDevice.lastErrorTime ? moment(iotDevice.lastErrorTime, "YYYY-MM-DDTHH:mm:ssZ").unix() : false

    console.log('lastSeen:', lastSeenEpoch, 'errorEpoch:', errorEpoch)

    // Device should be online, the error timestamp is older than latest timestamp for heartbeat, state, etc
    if (lastSeenEpoch && errorEpoch && (lastSeenEpoch > errorEpoch)) {
        return dbUpdate(true)
    }

    // error status code 4 matches
    // lastErrorStatus.code = 4
    // lastErrorStatus.message = mqtt: SERVER: The connection was closed because MQTT keep-alive check failed.
    // will also be 4 for other mqtt errors like command not sent (qos 1 not acknowledged, etc)
    if (iotDevice.lastErrorStatus && iotDevice.lastErrorStatus.code && iotDevice.lastErrorStatus.code === 4) {
        return dbUpdate(false)
    }

    return dbUpdate(false)
})
```

I also created a function to use with commands, to send a command to the device to check if it's online:

```
export const isDeviceOnline = functions.https.onCall(async (data, context) => {
    if (!context.auth) {
        throw new functions.https.HttpsError('failed-precondition', 'You must be logged in to call this function!');
    }
    // deviceID is passed in deviceID object key
    const deviceID = data.deviceID

    await dm.setAuth()

    const dbUpdate = (isOnline) => {
        if (('wasOnline' in data) && data.wasOnline !== isOnline) {
            console.log('updating db', deviceID, isOnline)
            db.collection("devices").doc(deviceID).update({ online: isOnline })
        } else {
            console.log('NOT updating db', deviceID, isOnline)
        }
        return isOnline
    }

    try {
        await dm.sendCommand(deviceID, 'alive?', 'alive')
        console.log('Assuming device is online after successful alive? command')
        return dbUpdate(true)
    } catch (error) {
        console.log("Unable to send alive? command", error)
        return dbUpdate(false)
    }
})
```

This also uses my version of a modified `DeviceManager`; you can find all the example code on this gist (to make sure you're using the latest update, and to keep this post small): <https://gist.github.com/tripflex/3eff9c425f8b0c037c40f5744e46c319>

All of this code, just to check if a device is online or not ... which could be easily handled by Google emitting some kind of event or adding an easy way to handle this. ***COME ON GOOGLE GET IT TOGETHER!***

Upvotes: 1
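Note: the "looping function that scans the device registry" from the accepted answer boils down to a timestamp comparison. A minimal Python sketch of just that check — it assumes you have already fetched the device list (e.g. via the Cloud IoT `devices.list` REST call with a field mask for the timestamps), assumes whole-second RFC 3339 timestamps, and leaves the actual alerting hook out:

```python
from datetime import datetime, timedelta, timezone

def stale_devices(devices, max_age=timedelta(minutes=5), now=None):
    """Return IDs of devices whose newest heartbeat/event is older than max_age.

    `devices` is an iterable of dicts such as
    {"id": "sensor-1", "lastHeartbeatTime": "2018-03-16T12:00:00Z", ...}.
    """
    now = now or datetime.now(timezone.utc)
    stale = []
    for device in devices:
        stamps = [device.get("lastHeartbeatTime"), device.get("lastEventTime")]
        parsed = [
            datetime.strptime(s, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
            for s in stamps if s
        ]
        # Devices that have never been seen count as stale too.
        if not parsed or now - max(parsed) > max_age:
            stale.append(device["id"])
    return stale

# Example with a made-up record:
print(stale_devices([{"id": "a", "lastHeartbeatTime": "2018-03-16T00:00:00Z"}]))  # ['a']
```

Run something like this on a schedule (cron, Cloud Scheduler, etc.) and fire your alert whenever the result is non-empty.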
2018/03/16 · 810 tokens · 3,168 characters
<issue_start>username_0: In a VB.NET class file, I'm trying to use extension methods such as ToList(), where the generic parameter is populated with what I know to be the subclass in this context.

```
Dim specificOrders = request.Orders _
    .Where(Function(x) x.GetType().Equals(GetType(SpecificOrderType))) _
    .ToList(Of SpecificOrderType)()
```

However, I'm getting the error message "extension method is not generic or has no type parameters available". Any ideas as to why this is? This method should be in the System.Linq namespace — I have it open and referenced in the file.<issue_comment>username_1: `ToList()` cannot have type arguments, because that method is not generic. So just use `ToList()`:

```
Dim suborders = orders _
    .Where(Function(x) x.GetType().Equals(GetType(nonspecificOrder))) _
    .ToList()
```

Try the example below.

```
'Order class
Public Class order
End Class

'specificOrder class
Public Class specificOrder
    Inherits order
End Class

'nonspecificOrder class
Public Class nonspecificOrder
    Inherits order
End Class
```

Usage:

```
Dim orders As List(Of order) = New List(Of [order])
Dim s1 As specificOrder = New specificOrder()
Dim s2 As specificOrder = New specificOrder()
Dim s3 As specificOrder = New specificOrder()
Dim s4 As specificOrder = New specificOrder()
Dim s5 As nonspecificOrder = New nonspecificOrder()
Dim s6 As nonspecificOrder = New nonspecificOrder()
orders.Add(DirectCast(s1, order))
orders.Add(DirectCast(s2, order))
orders.Add(DirectCast(s3, order))
orders.Add(DirectCast(s4, order))
orders.Add(DirectCast(s5, order))
orders.Add(DirectCast(s6, order))

Dim suborders = orders _
    .Where(Function(x) x.GetType().Equals(GetType(nonspecificOrder))) _
    .ToList()
```

Now the two "nonspecificOrder" type objects are returned.

Upvotes: 0 <issue_comment>username_2: You don't use `Where` to filter by type. That's what the `OfType` method is for. It filters and casts:

```
Dim specificOrders = request.Orders.
    OfType(Of SpecificOrderType)().
    ToList()
```

In that case, `OfType` returns an `IEnumerable(Of SpecificOrderType)` and calling `ToList` on that returns a `List(Of SpecificOrderType)`. That's how `ToList` works: it simply creates a `List(Of T)` with the same generic type as the `IEnumerable(Of T)` it's called on. If you were going to use `Where`, you would use `Cast` to go from the base type to `SpecificOrderType`:

```
Dim specificOrders = request.Orders.
    Where(Function(x) x.GetType().Equals(GetType(SpecificOrderType))).
    Cast(Of SpecificOrderType)().
    ToList()
```

One point to note about `OfType` is that it will match any item that can be cast to the specified type. That is usually what you want, and probably the result your original code would produce, but it's worth noting that any item of a type that inherited `SpecificOrderType` would be excluded by your original code but included using `OfType`.

Upvotes: 3 [selected_answer]
2018/03/16 · 937 tokens · 3,441 characters
<issue_start>username_0: I defined a map variable my_role in Terraform and set its value in an abc.tfvars file as follows. If I assign the account id as an actual value, it works; if I set the account id as a variable, it does not work. Does that mean a tfvars file only allows actual values, not variables? By the way, I use terraform workspaces, so my_role is different based on the workspace I select.

The following works:

```
my_role = {
  dev  = "arn:aws:iam::123456789012:role/myRole"
  test = ...
  prod = ...
}
```

The following does not work:

```
my_role = {
  dev  = "arn:aws:iam::${lookup(var.aws_account_id, terraform.workspace)}:role/myRole"
  test = ...
  prod = ...
}
```

The following does not work either:

```
lambdarole = {
  dev  = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/myRole"
  test = ...
  prod = ...
}
```
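Note: yes — Terraform variable definition files (`*.tfvars`) accept only literal values. Interpolation expressions such as `${lookup(...)}` are not evaluated there, and data sources like `data.aws_caller_identity` are not available at that point at all. The usual workaround is to keep only the plain values (such as the per-environment account-id map) in tfvars and build the composite string in a `.tf` file instead, e.g. `locals { my_role = "arn:aws:iam::${lookup(var.aws_account_id, terraform.workspace)}:role/myRole" }`, then reference it as `local.my_role`.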
2018/03/16 · 1,317 tokens · 4,120 characters
<issue_start>username_0: I have to write a program to create overflow and underflow for int-type variables and output the value of the variable in decimal and hex when overflow or underflow occurs. So here is my code:

```
#include <stdio.h>
#include <limits.h>

int main()
{
    int delta = INT_MAX;
    printf("\n delta    = %d (%4x)", delta, delta);
    delta = delta + 1;
    printf("\n delta +1 = %d (%4x)", delta, delta);
    delta = delta + 2;
    printf("\n delta +2 = %d (%4x)", delta, delta);
    printf("\n");

    int delta_min = INT_MIN;
    printf("\n delta_min    = %d (%4x)", delta_min, delta_min);
    delta_min = delta_min - 1;
    printf("\n delta_min -1 = %d (%4x)", delta_min, delta_min);
    delta_min = delta_min - 2;
    printf("\n delta_min -2 = %d (%4x) \n\n", delta_min, delta_min);

    return 0;
}
```

The answer is supposed to be like:

```
delta     = 2147483647 (7fffffff)
delta +1  = -2147483648 (80000000)
delta +2  = -2147483647 (80000001)

delta_min    = -2147483648 (80000000)
delta_min -1 = 2147483647 (7fffffff)
delta_min -2 = 2147483646 (7ffffffe)
```

But my answer is coming out like:

```
delta     = 2147483647 (7fffffff)
delta +1  = -2147483648 (80000000)
delta +2  = -2147483646 (80000002)  <<

delta_min    = -2147483648 (80000000)
delta_min -1 = 2147483647 (7fffffff)
delta_min -2 = 2147483645 (7ffffffd)  <<
```

I cannot figure out what I am doing wrong here. I'm not asking you to solve my homework — just a hint would be extremely helpful! Thank you for taking your time to read it~
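Note: the hint here is arithmetic, not overflow. The code reassigns `delta`, so by the second `printf` it has accumulated +3 relative to `INT_MAX` (hence 80000002), while the expected table applies +1 and +2 each to the *original* `INT_MAX`. Printing `delta + 2` without the intermediate reassignment (or resetting `delta` first) gives 80000001; the same applies to `delta_min`. (Strictly speaking, signed overflow is undefined behavior in C; the wraparound shown is what two's-complement machines typically produce.) A small Python check of the expected two's-complement values — an illustration, not part of the original thread:

```python
INT_MAX = 0x7FFFFFFF

def wrap32(x):
    """Reduce x to a signed 32-bit two's-complement value."""
    x &= 0xFFFFFFFF
    return x - 0x100000000 if x & 0x80000000 else x

for offset in (1, 2):
    value = wrap32(INT_MAX + offset)
    print(value, format(value & 0xFFFFFFFF, '08x'))
# -2147483648 80000000
# -2147483647 80000001
```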
2018/03/16 · 760 tokens · 2,450 characters
<issue_start>username_0: I am building a SciGraph database on my local machine and trying to move this entire folder into Docker and run it. When I run the shell script on my local machine it runs without error, but when I add the same folder inside Docker and try to run it, it fails. Am I doing this the right way? Here's my Dockerfile:

```
FROM goyalzz/ubuntu-java-8-maven-docker-image
ADD ./SciGraph /usr/share/SciGraph
WORKDIR /usr/share/SciGraph/SciGraph-services
RUN pwd
EXPOSE 9000
CMD ['./run.sh']
```

When I try to run it I'm getting this error:

```
docker run -p9005:9000 test
/bin/sh: 1: [./run.sh]: not found
```

If I run it using the command below, it works:

`docker run -p9005:9000 test -c "cd /usr/share/SciGraph/SciGraph-services && sh run.sh"`

As I already marked the directory as WORKDIR, running the script inside Docker using CMD throws an error.<issue_comment>username_1: Give `/` after SciGraph-services, change the command to "sh run.sh", and look into the run.sh file permissions as well.

Upvotes: 0 <issue_comment>username_2: It is likely that your `run.sh` doesn't have the `#!/bin/bash` header, so it cannot be executed just by running `./run.sh`. Nevertheless, always prefer to run scripts as `/bin/bash foo.sh` or `/bin/sh foo.sh` when in Docker, especially because you don't know what changed in files sourced in images downloaded from public repositories. So your `CMD` statement would be:

```
CMD /bin/bash -c "/bin/bash run.sh"
```

Upvotes: 0 <issue_comment>username_3: You have to add the shell and the executable to the CMD array...

```
CMD ["/bin/sh", "./run.sh"]
```

Upvotes: 0 <issue_comment>username_4: For SciGraph, as described in their README, you need to run mvn install before you run their services. You can set your shell to bash and use a docker-compose file to run the docker image as shown below.

Dockerfile:

```
FROM goyalzz/ubuntu-java-8-maven-docker-image
ADD ./SciGraph /usr/share/SciGraph
SHELL ["/bin/bash", "-c"]
WORKDIR /usr/share/SciGraph
RUN mvn -DskipTests -DskipITs -Dlicense.skip=true install
RUN cd /usr/share/SciGraph/SciGraph-services && chmod a+x run.sh
EXPOSE 9000
```

Build the SciGraph docker image by running:

```
docker build . -t scigraph_test
```

docker-compose.yml:

```
version: '2'
services:
  scigraph-server:
    image: scigraph_test
    working_dir: /usr/share/SciGraph/SciGraph-services
    command: bash run.sh
    ports:
      - 9000:9000
```

Upvotes: 2 [selected_answer]
2018/03/16 · 1,475 tokens · 4,892 characters
<issue_start>username_0: We are in the process of remediation, re-engineering old JS web resources for the latest D365 v9 SDK changes w.r.t. client scripting API improvements & deprecation. When rewriting the Web API methods using `Xrm.WebApi`, we ended up with this blocker. The scenario is setting `null` on a lookup, and we tried the below code:

```
var data = {
    "abc_relatedentity@odata.bind": null
};
Xrm.WebApi.updateRecord("abc_entity", abc_entityid, data).then(successCallback, errorCallback);
```

This is throwing the error:

> "The 'odata.bind' instance or property annotation has a null value. In OData, the 'odata.bind' instance or property annotation must have a non-null string value."

The idea is to retire the redundant XHR request code below. But this is the only workaround we have now (referring to [MSDN](https://learn.microsoft.com/en-us/powerapps/developer/common-data-service/webapi/associate-disassociate-entities-using-web-api#remove-a-reference-to-an-entity)).

```
var req = new XMLHttpRequest();
req.open("DELETE", Xrm.Utility.getGlobalContext().getClientUrl() + "/api/data/v9.0/accounts(recordGUID)/account_parent_account/$ref", true);
req.setRequestHeader("Accept", "application/json");
req.setRequestHeader("Content-Type", "application/json; charset=utf-8");
req.setRequestHeader("OData-MaxVersion", "4.0");
req.setRequestHeader("OData-Version", "4.0");
req.onreadystatechange = function() {
    if (this.readyState === 4) {
        req.onreadystatechange = null;
        if (this.status === 204 || this.status === 1223) {
            //Success - No Return Data - Do Something
        }
    }
};
req.send();
```

Has anybody faced this and handled it? Am I missing something?<issue_comment>username_1: To set null on the lookup use:

```
var data = {
    _[LookupFieldName]_value : null
}
Xrm.WebApi.updateRecord("abc_entity", abc_entityid, data).then(successCallback, errorCallback)
```

For example, to remove the `contact.parentcustomerid` field value you need to use:

```
var data = {};
data._parentcustomerid_value = null
var t = await Xrm.WebApi.updateRecord("contact", "{0200E6F5-1D21-E811-A954-0022480042B3}", data)
```

Upvotes: -1 <issue_comment>username_2: I just tried in v9.1.0.3832, and `var data = { _[LookupFieldName]_value : null }` is working for me.

```
var data = {
    "statecode": 1,
    "_myprefix_mylookupfieldname_value": null
}
Xrm.WebApi.updateRecord("entityName", recordId, data);
```

Upvotes: -1 <issue_comment>username_3: You have to use the DELETE method to remove the lookup value. The format is as follows:

```
/api/data/v8.0/accounts(1DD18913-11CB-E511-80D2-C4346BDC11C1)/primarycontactid/$ref
```

Upvotes: 1 <issue_comment>username_4: Great news! You can set a lookup field to `null` in a `PATCH` request by adding this header to your request:

```
autodisassociate: true
```

And then you can use something like this to alter your lookup field in any way:

```cs
SetLookupField(requestBody, "systemusers", "msdfm_MyUser", null)
// Or
SetLookupField(requestBody, "systemusers", "msdfm_MyUser", "f5b0b514-aea8-ea11-a812-000d3a569fe1")
// ...

private static void SetLookupField(JObject requestBody, string typePlural, string name, string value)
{
    if (!string.IsNullOrEmpty(value))
    {
        requestBody.Add($"{name}@odata.bind", $"/{typePlural}({value})");
    }
    else
    {
        requestBody.Add($"_{name.ToLower()}_value", null);
    }
}
```

OP uses `XMLHttpRequest` anyway, so I thought a way to do this using `PATCH` would be relevant here.

Upvotes: -1 <issue_comment>username_5: You should try Xrm.WebApi.online.execute or Xrm.WebApi.online.executeMultiple:

```
var Sdk = window.Sdk || {};

/**
 * Request to execute an update operation
 */
Sdk.UpdateRequest = function(entityName, entityId, payload) {
    this.etn = entityName;
    this.id = entityId;
    this.payload = payload;
};

// NOTE: The getMetadata property should be attached to the function prototype instead of the
// function object itself.
Sdk.UpdateRequest.prototype.getMetadata = function () {
    return {
        boundParameter: null,
        parameterTypes: {},
        operationType: 2, // This is a CRUD operation. Use '0' for actions and '1' for functions
        operationName: "Update",
    };
};

// Construct a request object from the metadata
var payload = { "_abc_relatedentity_value": null };
var updateRequest = new Sdk.UpdateRequest("abc_entity", abc_entityid, payload);

// Use the request object to execute the function
Xrm.WebApi.online.execute(updateRequest).then(
    function (response) {
        console.log(response)
    },
    function (error) {
        console.log(error.message);
        // handle error conditions
    }
);
```

Upvotes: 0 <issue_comment>username_6: I think setting the column to null is enough; please make sure you removed '@odata.bind':

```
var data = { "abc_relatedentity": null };
```

This is working for me.

Upvotes: 1
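Note: for completeness, here is what username_4's PATCH approach looks like as a standalone HTTP call, sketched in Python with the `requests` library. The org URL, token, entity set, and field names are placeholders, and the `autodisassociate` header is taken on trust from that answer rather than verified here:

```python
import requests

BASE = "https://yourorg.api.crm.dynamics.com/api/data/v9.0"  # placeholder org URL
HEADERS = {
    "Authorization": "Bearer <access-token>",  # OAuth token acquisition not shown
    "Content-Type": "application/json; charset=utf-8",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    "autodisassociate": "true",  # per username_4's answer
}

# Clear the lookup by sending the _value form of the field with null.
body = {"_abc_relatedentity_value": None}
resp = requests.patch(f"{BASE}/abc_entities(<record-guid>)", json=body, headers=HEADERS)
print(resp.status_code)  # 204 No Content on success
```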
2018/03/16 · 706 tokens · 2,978 characters
<issue_start>username_0: I was wondering if someone could explain to me how the space complexity of both these algorithms works. I have done readings on it, but they seem contradictory, if I understand correctly. I'm, for example, interested in how a linked list would affect the space complexity, and this question says it makes it better: [Why is mergesort space complexity O(log(n)) with linked lists?](https://stackoverflow.com/questions/24171242/why-is-mergesort-space-complexity-ologn-with-linked-lists) This question, however, says it shouldn't matter: [Merge Sort Time and Space Complexity](https://stackoverflow.com/questions/10342890/merge-sort-time-and-space-complexity) Now I'm a bit new to programming and would like to understand the theory a bit better, so dummy language would be appreciated.<issue_comment>username_1: The total space complexity of merge sort is `O(n)` since you have to store the elements somewhere. Nevertheless, there can indeed be a difference in *additional* space complexity between an array implementation and a linked-list implementation. Note that you can implement an iterative version that only requires `O(1)` additional space. However, if I remember correctly, this version would perform horribly.

In the conventional recursive version, you need to account for the stack frames. That alone gives an `O(log n)` additional space requirement. In a linked-list implementation, you can perform merges in place without any auxiliary memory; hence the `O(log n)` additional space complexity. In an array implementation, merges require auxiliary memory (likely an auxiliary array), and the last merge requires the same amount of memory as that used to store the elements in the first place. Hence the `O(n)` additional space complexity.

Keep in mind that space complexity tells you how the space needs of the algorithm grow as the input size grows. There are details that space complexity ignores. Namely, the sizes of a stack frame and an element are probably different, and a linked list takes up more space than an array because of the links (the references). That last detail is important for small elements, since the additional space requirement of the array implementation is likely less than the additional space taken by the links of the linked-list implementation.

Upvotes: 2 <issue_comment>username_2:

> Why is merge sort space complexity O(log(n)) with linked lists?

This is only true for top-down merge sort for linked lists, where O(log2(n)) stack space is used due to recursion. For bottom-up merge sort for linked lists, space complexity is O(1) (constant space). One example of an optimized bottom-up merge sort for a linked list uses a small (26 to 32 entry) array of pointers or references to the first nodes of lists. This would still be considered O(1) space complexity. Link to pseudocode in the wiki article: <https://en.wikipedia.org/wiki/Merge_sort#Bottom-up_implementation_using_lists>

Upvotes: 0
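Note: username_2's point is easy to see in code. Below is a minimal Python sketch of bottom-up merge sort on a singly linked list (the width-doubling variant rather than the array-of-lists one from the wiki link): everything is done by relinking existing nodes, so aside from a handful of local variables there is no auxiliary storage and no recursion stack — O(1) extra space, O(n log n) time.

```python
class Node:
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

def merge(a, b):
    """Merge two sorted lists in place; return (head, tail)."""
    dummy = tail = Node(None)
    while a and b:
        if a.val <= b.val:
            tail.next, a = a, a.next
        else:
            tail.next, b = b, b.next
        tail = tail.next
    tail.next = a or b
    while tail.next:          # walk to the true tail of the merged run
        tail = tail.next
    return dummy.next, tail

def split(head, n):
    """Detach the first n nodes; return the head of the remainder."""
    for _ in range(n - 1):
        if head is None:
            return None
        head = head.next
    if head is None:
        return None
    rest, head.next = head.next, None
    return rest

def merge_sort(head):
    length, node = 0, head
    while node:
        length, node = length + 1, node.next
    width = 1
    while width < length:
        dummy = Node(None, head)
        prev, cur = dummy, head
        while cur:
            left = cur
            right = split(left, width)   # first run of `width` nodes
            cur = split(right, width)    # detach second run, advance cursor
            merged_head, merged_tail = merge(left, right)
            prev.next, prev = merged_head, merged_tail
        head = dummy.next
        width *= 2
    return head

# Demo: 3 -> 1 -> 2 becomes 1 -> 2 -> 3
node = merge_sort(Node(3, Node(1, Node(2))))
while node:
    print(node.val, end=" ")
    node = node.next
```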
2018/03/16 · 638 tokens · 2,132 characters
<issue_start>username_0:

```
<select>
  <option>a</option>
  <option>b</option>
  <option>c</option>
  <option>d</option>
  <option>e</option>
</select>
```

Naturally, when I press a, b, c, d, or e, this jumps to the option with that as the first letter. I need to disable this, preferably not through JavaScript (not required). Is there really no way to turn off this keypress jump-to-letter behavior?<issue_comment>username_1: You cannot do it without JavaScript. With JavaScript you will need to add an event handler for the `keypress` event, to prevent default behavior and to stop propagation.

With jQuery:

```
$("select").on("keypress", function (event) {
    event.preventDefault();
    event.stopPropagation();
});
```

Take care that `$("select")` will select all your select inputs and you'll end up adding this `keypress` handler to all of them.

Without jQuery:

```
<select id="mySelect">
  <option>a</option>
  <option>b</option>
  <option>c</option>
  <option>d</option>
  <option>e</option>
</select>

<script>
var select = document.getElementById('mySelect');
select.addEventListener("keypress", function (event) {
    event.preventDefault();
    event.stopPropagation();
});
</script>
```

Working fiddle: <https://jsfiddle.net/ep3xfyc8/9/>

**Later edit: There exists a solution to achieve this without JavaScript. See username_4's response below.**

Upvotes: 3 [selected_answer]<issue_comment>username_2: Copied from: <https://stackoverflow.com/a/1227324>

```
function IgnoreAlpha(e)
{
    if (!e) { e = window.event; }
    if (e.keyCode >= 65 && e.keyCode <= 90) // A to Z
    {
        e.returnValue = false;
        e.cancel = true;
    }
}
```

```
<select>
  <option>A</option>
  <option>B</option>
  <option>C</option>
</select>
```

Upvotes: -1 <issue_comment>username_3: This is an adaptation of what @razvan-dimitru proposes, but keeping JS to a minimum. In a way, you only need markup for this solution:

```html
<select onkeypress="event.preventDefault(); event.stopPropagation();">
  <option>a</option>
  <option>b</option>
  <option>c</option>
  <option>d</option>
  <option>e</option>
</select>
```

Upvotes: 0 <issue_comment>username_4: You *can* do it **without** JavaScript. Prefix every option with a [Zero-Width Non Joiner](http://www.fileformat.info/info/unicode/char/200c/index.htm) character.

```html
Select:
<select>
  <option value="a">&zwnj;a</option>
  <option value="b">&zwnj;b</option>
  <option value="c">&zwnj;c</option>
  <option value="d">&zwnj;d</option>
  <option value="e">&zwnj;e</option>
</select>
```

If this is going to be submitted in an HTTP request, the `&zwnj;` character should be ignored by the browser. Every option should have its own `value` attribute for form submission purposes anyway, and you would not need to put the `&zwnj;` character into the option values.

Upvotes: 2
2018/03/16 · 421 tokens · 1,314 characters
<issue_start>username_0:

```
#include <iostream>
using namespace std;

// Function to copy one string into another
// using recursion
void myCopy(char str1[], char str2[], int index = 0)
{
    // copying each character from str1 to str2
    str2[index] = str1[index];

    // if string reaches the end then stop
    if (str1[index] == '\0')
        return;

    // increase character index by one
    myCopy(str1, str2, index + 1);
}

// Driver function
int main()
{
    char s1[100] = "GEEKSFORGEEKS";
    char s2[100] = "";
    myCopy(s1, s2);
    cout << s2;
    return 0;
}
```

I did not understand how the value of s2 is getting printed, since we passed the addresses of s1 and s2 to the function myCopy(). myCopy() has two local arrays str1 and str2 as arguments, so I was thinking two local arrays with the values of s1 and s2 would be created (call by value). Shouldn't the function prototype be myCopy(char \*s1, char \*s2) for printing s2 (call by reference)?<issue_comment>username_1: To be able to reinitialize your parameter (array `s2` in this case) you need to **pass it by reference**. Like this:

```
void myCopy(char &str1[], char &str2[], int index = 0)
```

This means that `myCopy()` will use the arrays as they are; otherwise it will create a local duplication of them.

Upvotes: -1 <issue_comment>username_2:

```
void myCopy(char &str1[], char &str2[], int index = 0)
```

Upvotes: 0
2018/03/16 · 1,272 tokens · 3,952 characters
<issue_start>username_0: I'm trying to create a horizontally scrollable div with flexbox. So far, I have most of it. However, the only problem I am facing is that I'm trying to add space between my items, but for some reason nothing seems to be working. I've tried adding margin, padding, justifying content, etc. Here's a [jsfiddle](https://jsfiddle.net/u754wrmz/) of what I'm trying to achieve.

```css
.grid {
  display: -webkit-flex;
  display: -ms-flexbox;
  display: flex;
  margin-bottom: 20px;
  justify-content: space-between;
}

/*Each item is one column*/
.item {
  width: 50%;
}

.article-scroll-mobile {
  box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2);
  flex-wrap: nowrap;
  text-align: center;
  overflow-x: auto;
  -webkit-overflow-scrolling: touch; /*For iOS smooth scroll effect*/
}
```

```html
<div class="grid article-scroll-mobile">
  <div class="item"><img src="https://www.w3schools.com/howto/img_fjords.jpg"></div>
  <div class="item"><img src="https://www.w3schools.com/howto/img_fjords.jpg"></div>
  <div class="item"><img src="https://www.w3schools.com/howto/img_fjords.jpg"></div>
  <div class="item"><img src="https://www.w3schools.com/howto/img_fjords.jpg"></div>
  <div class="item"><img src="https://www.w3schools.com/howto/img_fjords.jpg"></div>
  <div class="item"><img src="https://www.w3schools.com/howto/img_fjords.jpg"></div>
</div>
```

<issue_comment>username_1: There are a few things you have to consider. First of all, with `justify-content` you define how remaining space is handled. By using `space-between`, your items will be aligned so that the space between them is equal; by setting it to `center`, the remaining space will be around all items, with all items stuck together. In your case, though, there is no remaining space, because your items actually stretch the div. So that doesn't help you.

Next, you've set the width of an item to `50%`, which is fine: your items will be 50% of the viewport. That's because your grid will implicitly be 100% of the viewport. But because your image overflows the box, you would need to set margins bigger than the overflow of your image to actually see them. So, to fix this, you make the images responsive by making them as wide as the item:

```
.item img {
  display: block;
  height: auto;
  width: 100%;
}
```

But that poses another problem: flexbox tries to size its flex items to fit them all into the flex container. So you'll see that it automatically resizes your items so they will all fit in. To fix this, you have to explicitly force the width of your items:

```
.item {
  flex: 0 0 50%;
}
```

Which is a shorthand for:

```
.item {
  flex-grow: 0;
  flex-shrink: 0;
  flex-basis: 50%;
}
```

So basically you say: make my item 50% of its container, and don't use your awesome algorithm to try to make it bigger or smaller. Now you've got what you want, and you can use `margin-right: 20px`, for example, to create a 20px space between your items.
Full snippet:

```css
.grid {
  display: flex;
  width: 100%;
}

.item {
  flex: 0 0 50%;
  margin-right: 20px;
}

.item img {
  display: block;
  height: auto;
  width: 100%;
}

.article-scroll-mobile {
  box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2);
  flex-wrap: nowrap;
  text-align: center;
  overflow-x: auto;
  -webkit-overflow-scrolling: touch; /*For iOS smooth scroll effect*/
}
```

```html
<div class="grid article-scroll-mobile">
  <div class="item"><img src="https://www.w3schools.com/howto/img_fjords.jpg"></div>
  <div class="item"><img src="https://www.w3schools.com/howto/img_fjords.jpg"></div>
  <div class="item"><img src="https://www.w3schools.com/howto/img_fjords.jpg"></div>
  <div class="item"><img src="https://www.w3schools.com/howto/img_fjords.jpg"></div>
  <div class="item"><img src="https://www.w3schools.com/howto/img_fjords.jpg"></div>
  <div class="item"><img src="https://www.w3schools.com/howto/img_fjords.jpg"></div>
</div>
```

Upvotes: 6 [selected_answer]<issue_comment>username_2: There is [`column-gap`](https://developer.mozilla.org/en-US/docs/Web/CSS/column-gap) to adjust the gaps between items in a row:

```
.form-container {
  display: flex;
  flex-wrap: wrap;
  justify-content: space-between;
  align-items: stretch;
  column-gap: 0.875rem;
}
```

Upvotes: 4
2018/03/16
1,759
6,460
<issue_start>username_0: I am debugging an Android app that usually crashes when I plug into an Arduino it is supposed to be communicating with. Consequently, I need to get a wireless connection to the device's logcat, which is what brought me to [ADB over TCP](https://developer.android.com/studio/command-line/adb.html#wireless) as seen on the Android developers webpage. The recommended process is:

```
C:\Users\User> cd AppData\Local\Android\sdk\platform-tools
C:\Users\User\AppData\Local\Android\sdk\platform-tools> adb kill-server
C:\Users\User\AppData\Local\Android\sdk\platform-tools> adb start-server
* daemon not running; starting now at tcp:5037
* daemon started successfully
C:\Users\User\AppData\Local\Android\sdk\platform-tools> adb tcpip 5555
restarting in TCP mode port: 5555
C:\Users\User\AppData\Local\Android\sdk\platform-tools> adb connect 10.20.5.160:5555
```

However, the typical response is:

```
unable to connect to 10.20.5.160:5555: cannot connect to 10.20.5.160:5555: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. (10060)
```

I could not consistently get a connection between a PC and a mobile device. Over the course of three weeks using several networks and devices, I could only connect two times out of an agonizing many attempts. I have dug deep into the posted questions on stackoverflow and tried literally every recommended alteration for getting an adb connection, from using an Android Studio plugin to forwarding ports to trying several other ports to toggling my device's internet. I still could not get a connection more than a fraction of a percent of the time.

When I look at the network state of my device after `adb tcpip`, I find no evidence of the device listening for a connection:

```
C:\Users\User\AppData\Local\Android\sdk\platform-tools> adb shell
j3ltevzw:/ $ netstat -n
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 fc00:db20:35b:7399::5:10.20.5.160:4214 fc00:db20:35b:7399::5:192.168.127.12: ESTABLISHED
udp 4288 0 adb shell ip -f inet addr show wlan0:68 10.20.5.1:67 ESTABLISHED
Active UNIX domain sockets (w/o servers)
Proto RefCnt Flags Type State I-Node Path
... ... ... ... ... ... ...
... ... ... ... ... ...
```

Also, when I try to ping my device, I usually get:

```
C:\Users\Benjamin\AppData\Local\Android\sdk\platform-tools> ping 10.20.5.160

Pinging 10.20.5.160 with 32 bytes of data:
Reply from 10.20.5.232: Destination host unreachable.
Reply from 10.20.5.232: Destination host unreachable.
Reply from 10.20.5.232: Destination host unreachable.
Reply from 10.20.5.232: Destination host unreachable.

Ping statistics for 10.20.5.160:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss)
```

What I am looking for is a process or file that can be run to consistently open a TCP adb connection, or at least an explanation of what about my setup is invalid. Ideas?

**Update 1:** I have started a bounty on this question. I feel like the problem with the ADB software used to make a connection for debugging is twofold:

1. `adb tcpip` does not always invoke the mobile device to listen on the specified port
2. `adb connect <ip>:<port>`, I am assuming, is not given permission by the network to connect; I am not a networking expert, but the fact that pinging a device returns "destination host unreachable" seems a little off

I will award the first person to find a successful solution.
Seeing that a USB ADB connection gives a user access to both the mobile device's and the host's terminal, there really is no excuse for a connection between the two not to be possible.

**Update 2:** I have recently had more success in connecting to ADB, as well as receiving successful pings, on my school network. Note that connections on this network are insecure, and the host and mobile devices were on separate subnetworks. Any idea why these conditions would work when my secure network at home, which uses only one subnet, does not?<issue_comment>username_1: Try starting from scratch and set up ADB-over-tcp following the steps below:

1. Turn off USB debugging on your device and turn it on again (just to reset).
2. Kill the ADB server on your PC using `adb kill-server`
3. Connect your device to the PC using the USB cable and type `adb devices`. This will automatically restart the ADB server again.
4. Enter `adb tcpip 5555` on your PC terminal. This will switch the `adbd` daemon on your device to tcp-ip mode.
5. Disconnect your device from USB. Now connect your device to the *same wireless network* as your PC, either the same wi-fi or use your phone as a *hotspot*.
6. Determine your phone's IP. You can do this in the wi-fi settings on your phone. If you are using your phone as a hotspot, then generally your IP should be `192.168.43.1` (not required but most of the time it's the same).
7. Connect to ADB via tcp using `adb connect <ip>`

This should hopefully connect your device to your PC via tcp-ip.

Upvotes: 4 [selected_answer]<issue_comment>username_2: I found a solution that works for me, but I feel it shouldn't be necessary and it may not be applicable for everybody else with these issues, so it's still not quite what I was aiming for. Anyways, after seeing **username_1** suggest what IP address I should expect when using a mobile hotspot on a phone, I realized any network complications would be solved if I *was* the network, so I connected my mobile device to the mobile hotspot on my laptop. Instant ping responses and immediate ADB connection with the popularly suggested commands.

Upvotes: 1 <issue_comment>username_3: Just found myself in the same situation. Had no problem yesterday but today I couldn't connect. The solution was simple: I realized that today I'm connected to a VPN, which obviously puts me on a different network. So disconnecting and connecting to the same Wi-Fi solved the trouble.

Upvotes: 1 <issue_comment>username_4: I had a similar problem. If I was wirelessly connected via ADB to my phone, when I started my VPN with the killswitch setting enabled, the ADB program crashed. By deactivating the killswitch setting and then restarting my computer, I can use the VPN and the ADB at the same time without any problem.

Upvotes: 0
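Since the answers above all boil down to running the same command sequence and retrying, here is a minimal sketch of a wrapper script that automates it. This is an illustration, not a guaranteed fix: it assumes Python 3.7+, that `adb` is on the PATH, and that the phone IP below is a placeholder you replace with your device's Wi-Fi address.

```python
import subprocess
import time

ADB = "adb"                 # assumes adb is on the PATH
PHONE_IP = "10.20.5.160"    # placeholder: substitute your device's Wi-Fi IP
PORT = 5555

def run(*args):
    """Run one adb command and return its combined stdout/stderr."""
    result = subprocess.run([ADB, *args], capture_output=True, text=True)
    return (result.stdout + result.stderr).strip()

def connect(retries=5, delay=2.0):
    run("kill-server")
    run("start-server")
    run("tcpip", str(PORT))          # device must still be on USB for this step
    time.sleep(delay)                # give adbd time to restart in TCP mode
    for attempt in range(1, retries + 1):
        out = run("connect", f"{PHONE_IP}:{PORT}")
        print(f"attempt {attempt}: {out}")
        if out.startswith("connected to") or "already connected" in out:
            return True
        time.sleep(delay)
    return False

if __name__ == "__main__":
    if connect():
        print("success")
    else:
        print("failed: check that PC and phone share a subnet (no VPN / client isolation)")
```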
2018/03/16
802
3,234
<issue_start>username_0: For purposes of our integration checking, I want to count the number of messages on an Azure queue. The method looks like this:

```
internal void VerifyMessagesOnQueue(string queueNameKey, int expectedNumberOfMessages)
{
    var azureStorageConnectionKey = ConfigurationManager.AppSettings["AzureStorageConnectionKey"];
    var storageAccount = CloudStorageAccount.Parse(azureStorageConnectionKey);
    var queueClient = storageAccount.CreateCloudQueueClient();
    var queue = queueClient.GetQueueReference(ConfigurationManager.AppSettings[queueNameKey]);

    var messages = queue.PeekMessages(int.MaxValue);

    messages.Count().Should().Be(expectedNumberOfMessages);
}
```

Right now I'm using `var messages = queue.PeekMessages(int.MaxValue);` to try to get all the messages on a queue. It returns an HTTP 400 response. I have tried `var messages = queue.PeekMessages(expectedNumberOfMessages);`, but when `expectedNumberOfMessages` is 0, I also get an HTTP 400 response.

How can I reliably check the number of messages on an Azure queue without disrupting it (this is why I was using `.PeekMessages`)?<issue_comment>username_1: > I want to count the number of messages on an Azure queue

I suggest you try the following code to achieve your goal. I have created a console project to test.

StorageConnectionString in App.config:

```html
```

Code in Program.cs:

```
static void Main(string[] args)
{
    string Queue_Name = "myqueue";
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
        Microsoft.Azure.CloudConfigurationManager.GetSetting("StorageConnectionString"));
    CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
    CloudQueue queue = queueClient.GetQueueReference(Queue_Name);
    queue.FetchAttributes();
    var count = queue.ApproximateMessageCount;
    Console.WriteLine("message number in queue:" + count);
}
```

The result for the queue count:

[![enter image description here](https://i.stack.imgur.com/0kgdw.png)](https://i.stack.imgur.com/0kgdw.png)

Upvotes: 3 [selected_answer]<issue_comment>username_2: username_1's answer is correct. To get the approximate message count in a queue, you will need to fetch the queue's attributes. Adding a new answer to clarify the following:

> Right now I'm using var messages = queue.PeekMessages(int.MaxValue); to try to get all the messages on a queue. It returns an HTTP 400 response. I have tried var messages = queue.PeekMessages(expectedNumberOfMessages);, but when expectedNumberOfMessages is 0, I also get an HTTP 400 response.

Essentially `PeekMessages` is used to retrieve messages from the top of the queue without altering the retrieved messages' visibility. The maximum number of messages that can be fetched from a queue in a single request is 32, and the minimum is 1. Please check this [`link`](https://learn.microsoft.com/en-us/rest/api/storageservices/peek-messages) (URI Parameters section) for more details.

In both of your scenarios, you're specifying a count that is outside the allowed range (1 - 32), and this is why you're getting the `400` error back from the queue.

Upvotes: 2
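Plugging the accepted answer back into the original `VerifyMessagesOnQueue` helper might look like the sketch below. It keeps the question's config keys and FluentAssertions-style `Should()` call as-is; note that `ApproximateMessageCount` is, as the name says, approximate, so a strict equality check can be flaky on a busy queue.

```
internal void VerifyMessagesOnQueue(string queueNameKey, int expectedNumberOfMessages)
{
    var azureStorageConnectionKey = ConfigurationManager.AppSettings["AzureStorageConnectionKey"];
    var storageAccount = CloudStorageAccount.Parse(azureStorageConnectionKey);
    var queueClient = storageAccount.CreateCloudQueueClient();
    var queue = queueClient.GetQueueReference(ConfigurationManager.AppSettings[queueNameKey]);

    queue.FetchAttributes();                      // populates ApproximateMessageCount
    var count = queue.ApproximateMessageCount ?? 0;

    count.Should().Be(expectedNumberOfMessages);
}
```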
2018/03/16
712
2,713
<issue_start>username_0: I feel like this should be easy but I've been running into problems finding a solution. I have an array of file extensions:

```
var items = ["PDF", "XLS"]
```

I also have an array of file objects:

```
files = [{format: "TXT"},{format: "PDF"}]
```

I'd like a function where I can pass the array of file objects, and if any of the files have a format that is in the items array, it should return true. Thank you in advance for the help.

```
function hasItemExtension() {
 files.forEach(file => {
 if(items.indexOf(file.format) != -1) {
 return true;
 }
 }
};
```
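For what it's worth, one sketch of a fix: `return true` inside a `forEach` callback only returns from the callback, not from the outer function, which is why the attempt above always falls through. `Array.prototype.some` does exactly this short-circuiting test. A minimal version, taking both arrays as parameters rather than closing over globals:

```
function hasItemExtension(files, items) {
  // some() stops at the first match and returns true; false if nothing matches
  return files.some(file => items.indexOf(file.format) !== -1);
}

// usage
hasItemExtension([{format: "TXT"}, {format: "PDF"}], ["PDF", "XLS"]); // true
```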
2018/03/16
3,380
9,246
<issue_start>username_0: I am trying to get my three CTEs to join and plus one more table . Below is what I am doing but I am missing something . I have looked up same error and I am being told I am missing "," but I did that and still the same error. The error is "Invalid Table Name". Any help is much appreciated as always . Side note - I did run each CTE and each one ran perfectly by themselves and as single CTE, its when I try and put them together I get the error . ``` --------1. CTE WITH Memb AS ( Select * From icue.HSC_MBR_COV mbc Where mbc.POL_ISS_ST_CD = 'PA' And (mbc.LOB_TYP_ID = '12' OR mbc.CLM_PLTFM_ID = 'A9') Union Select * From icue.HSC_MBR_COV mbc Where mbc.POL_ISS_ST_CD = 'NJ' And (mbc.LOB_TYP_ID = '12' OR mbc.CLM_PLTFM_ID = 'A9') ), --select * from Memb ----------- 2.CTE Fax Flag With Fax_Flag AS( SELECT Distinct cmt.CNTC_NM As Provider_Name, cmt.FAX_NBR, CASE When cmt.FAX_NBR ='201-553-7889' THEN 'Yes' ELSE 'No' END AS Fax_Flag From icue.cmnct_trans cmt Where cmt.CNTC_NM ='CHILDRENS HOSP PHILADELPHIA' ), --Select * From Fax_Flag ------------3. CTE Letter Flag With Letter_Sent AS( Select Distinct act.Actv_strt_dttm AS Fax_Date, act.ACTV_TYP_ID, CASE When ACTV_TYP_ID ='5' THEN 'Yes' ELSE 'No' END AS Letter_Flag From icue.actv act Where trunc(act.actv_strt_dttm) between to_date('19-FEB-2018','DD-MON-YYYY') and to_date('06-MAR-2018','DD-MON-YYYY') ), Select * From Letter_sent ---------FINAL - I want to put all the CTE together Select Distinct mb.Fst_Nm AS First_Name, mb.Lst_Nm AS Last Name, mbc.Member_Policy_State, mbc.mbr_id AS Memeber_ID, cmt.HSC_ID AS Auth_Number, cmt.Fax_Number, cmt.Provider_Name, act.Fax_Date, act.Letter_Flag, cmt.Fax_Flag From icue.mbr mb Inner Join Memb mbc --- CTE 1 On mb.Mbr_ID = mbc.Mbr_ID Inner Join Fax_Flag cmt ---- CTE 2 On cmt.HSC_ID = cmt.HSC_ID Inner Join Letter_Sent act ---- CTE 3 On act.Mbr_ID = mbc.Mbr_ID ``` update \*\*\* I did try and join the two CTEs below, but I keep getting an "Table/view does not exists" error ``` --------1. CTE Bring in only PA and NJ market With Memb AS ( Select Distinct mbc.Hsc_Id AS Auth_Number, mbc.POL_ISS_ST_CD AS Policy_State From icue.HSC_MBR_COV mbc Where mbc.POL_ISS_ST_CD = 'PA' And (mbc.LOB_TYP_ID = '12' OR mbc.CLM_PLTFM_ID = 'A9') Union Select Distinct mbc.hsc_id AS Auth_Number, mbc.POL_ISS_ST_CD AS Policy_State From icue.HSC_MBR_COV mbc Where mbc.POL_ISS_ST_CD = 'NJ' And (mbc.LOB_TYP_ID = '12' OR mbc.CLM_PLTFM_ID = 'A9') ), ---select * from Memb ----------- 2.CTE Fax Flag Fax_Flag AS( SELECT Distinct cmt.CNTC_NM As Provider_Name, cmt.FAX_NBR AS Fax_Number, cmt.HSC_ID AS Auth_Number, CASE When cmt.FAX_NBR ='201-553-7889' THEN 'Yes' ELSE 'No' END AS Fax_Flag From icue.cmnct_trans cmt Left Join Memb mbc On cmt.hsc_id = mbc.Hsc_Id And cmt.CNTC_NM ='CHILDRENS HOSP PHILADELPHIA' ) Select * From Fax_Flag ```<issue_comment>username_1: Try removing the 2nd and 3rd `WITH` and the comma at the end of the 3rd CTE ``` --------1. CTE WITH Memb AS ( Select * From icue.HSC_MBR_COV mbc Where mbc.POL_ISS_ST_CD = 'PA' And (mbc.LOB_TYP_ID = '12' OR mbc.CLM_PLTFM_ID = 'A9') Union Select * From icue.HSC_MBR_COV mbc Where mbc.POL_ISS_ST_CD = 'NJ' And (mbc.LOB_TYP_ID = '12' OR mbc.CLM_PLTFM_ID = 'A9') ), --select * from Memb ----------- 2.CTE Fax Flag --- With Fax_Flag AS( SELECT Distinct cmt.CNTC_NM As Provider_Name, cmt.FAX_NBR, CASE When cmt.FAX_NBR ='201-553-7889' THEN 'Yes' ELSE 'No' END AS Fax_Flag From icue.cmnct_trans cmt Where cmt.CNTC_NM ='CHILDRENS HOSP PHILADELPHIA' ), --Select * From Fax_Flag ------------3. 
CTE Letter Flag --- With Letter_Sent AS( Select Distinct act.Actv_strt_dttm AS Fax_Date, act.ACTV_TYP_ID, CASE When ACTV_TYP_ID ='5' THEN 'Yes' ELSE 'No' END AS Letter_Flag From icue.actv act Where trunc(act.actv_strt_dttm) between to_date('19-FEB-2018','DD-MON-YYYY') and to_date('06-MAR-2018','DD-MON-YYYY') ) -- , Removed the comma ---------FINAL - I want to put all the CTE together Select Distinct mb.Fst_Nm AS First_Name, mb.Lst_Nm AS Last Name, mbc.Member_Policy_State, mbc.mbr_id AS Memeber_ID, cmt.HSC_ID AS Auth_Number, cmt.Fax_Number, cmt.Provider_Name, act.Fax_Date, act.Letter_Flag, cmt.Fax_Flag From icue.mbr mb Inner Join Memb mbc --- CTE 1 On mb.Mbr_ID = mbc.Mbr_ID Inner Join Fax_Flag cmt ---- CTE 2 On cmt.HSC_ID = cmt.HSC_ID Inner Join Letter_Sent act ---- CTE 3 On act.Mbr_ID = mbc.Mbr_ID ``` Upvotes: 2 <issue_comment>username_2: First of all, it's easier to read code if you lay it out neatly. When I do that I get this: ``` with memb as ( select distinct mbc.hsc_id as auth_number , mbc.pol_iss_st_cd as policy_state from icue.hsc_mbr_cov mbc where mbc.pol_iss_st_cd = 'PA' and (mbc.lob_typ_id = '12' or mbc.clm_pltfm_id = 'A9') union select distinct mbc.hsc_id as auth_number , mbc.pol_iss_st_cd as policy_state from icue.hsc_mbr_cov mbc where mbc.pol_iss_st_cd = 'NJ' and (mbc.lob_typ_id = '12' or mbc.clm_pltfm_id = 'A9') ) , fax_flag as ( select distinct cmt.cntc_nm as provider_name , cmt.fax_nbr as fax_number , cmt.hsc_id as auth_number , case when cmt.fax_nbr = '201-553-7889' then 'Yes' else 'No' end as fax_flag from icue.cmnct_trans cmt left join memb mbc on mbc.hsc_id = cmt.hsc_id and cmt.cntc_nm = 'CHILDRENS HOSP PHILADELPHIA' ) select * from fax_flag; ``` That `union` looks odd to me, as the two queries are the same apart from `pol_iss_st_cd`, so why not simply: ``` with memb as ( select distinct mbc.hsc_id as auth_number , mbc.pol_iss_st_cd as policy_state from icue.hsc_mbr_cov mbc where mbc.pol_iss_st_cd in ('PA','NJ') and (mbc.lob_typ_id = '12' or mbc.clm_pltfm_id = 'A9') ) , fax_flag as ( select distinct cmt.cntc_nm as provider_name , cmt.fax_nbr as fax_number , cmt.hsc_id as auth_number , case when cmt.fax_nbr = '201-553-7889' then 'Yes' else 'No' end as fax_flag from icue.cmnct_trans cmt left join memb mbc on cmt.hsc_id = mbc.hsc_id and cmt.cntc_nm = 'CHILDRENS HOSP PHILADELPHIA' ) select * from fax_flag; ``` That gives me *ORA-00942: table or view does not exist* because I don't have icue.hsc\_mbr\_cov, but otherwise it looks fine. If you are getting that error, which table is it pointing to and can you query it on its own (just to take `WITH` clause syntax out of the equation). If this is about the syntax in general, though, we can simplify it to a generic test case: ``` with demo1 (id) as ( select 'X' from dual ) , demo2 (id) as ( select 'X' from dual ) , demo3 (id) as ( select 'X' from dual ) select d.dummy from dual d join demo1 d1 on d1.id = d.dummy join demo2 d2 on d2.id = d1.id join demo3 d3 on d3.id = d2.id where d.dummy = 'X'; ``` Upvotes: 0 <issue_comment>username_3: I figured out what I was doing wrong . I was Referencing a column with out Referencing its Alias, since I defined it in the first CTE . Code is below ``` --------1. 
CTE Bring in only PA and NJ market With Memb AS ( Select Distinct mbc.Hsc_Id AS Auth_Number, mbc.POL_ISS_ST_CD AS Policy_State From icue.HSC_MBR_COV mbc Where mbc.POL_ISS_ST_CD = 'PA' And (mbc.LOB_TYP_ID = '12' OR mbc.CLM_PLTFM_ID = 'A9') Union Select Distinct mbc.hsc_id AS Auth_Number, mbc.POL_ISS_ST_CD AS Policy_State From icue.HSC_MBR_COV mbc Where mbc.POL_ISS_ST_CD = 'NJ' And (mbc.LOB_TYP_ID = '12' OR mbc.CLM_PLTFM_ID = 'A9') ), ---select * from Memb ----------- 2.CTE Fax Flag Fax_Flag AS( SELECT Distinct cmt.CNTC_NM As Provider_Name, cmt.FAX_NBR AS Fax_Number, --cmt.HSC_ID AS Auth_Number, CASE When cmt.FAX_NBR ='201-553-7889' THEN 'Yes' ELSE 'No' END AS Fax_Flag From Memb mbc Left Join icue.cmnct_trans cmt On mbc.Auth_Number = cmt.hsc_id ----- needed to update mbc.HSC_ID to mbc.auth_number And cmt.CNTC_NM ='CHILDRENS HOSP PHILADELPHIA' ) Select * From Fax_Flag ``` Upvotes: 0
2018/03/16
1,417
5,152
<issue_start>username_0: i'm struggling with declaring and accessing data in nested arrays in Javascript i can easily do it in C using nested structures, declaring first the lowest-level structure and including an array of it when declaring the upper-level structure and so on. I end up with a structure containing an array of structures, each containing an array of structures, etc... but i have only 3 days of Javascript experience so far... Just to help you understand how i need the data to be organized, here's an example: Let's imagine a library, this library has several floors, each floor has the same type of properties (name, number of books...), each floor has several departments and each has the same type of properties, each department has several shelves and so on... let's say the 1st floor get's its name from some famous mathematician, and is split into two departments: 1/ arithmetics and 2/ geometry the ideal for me would be to work with the data this way: ``` library.floor[0].name = 'Fermat' library.floor[0].department[0].name = 'arithmetics' library.floor[0].department[1].name = 'geometry' library.floor[0].department[1].shelve[4].authors = // authors list ``` so far i've tried this: ``` var library = { floors: [ {floor_name:'Fermat'}, {has_a_printing_machine:true}, {departments:[ {department_name:'group_theory'}, {shelves:[ {shelf_name:'letters_f_z}, {authors: ["Frobenius","Lagrange"]}]}]}] }; ``` i can get data from: ``` log(library.floors[0].floor_name); // ouputs 'fermat' log(library.floors[2].departments[0].department_name); // outputs 'group_theory' log(library.floors[2].departments[1].shelves[1].authors[1]); // outputs 'Lagrange' ``` but it isn't what i need, i can't access a 'departments' property for each floor for instance... and i'd like to add data dynamically in these arrays. i'm doing it wrong and i can't figure out how to do it right... thanks for your help!<issue_comment>username_1: You're misunderstanding JavaScript object literals. They are completely different that structs in C. Structs are type declarations, JavaScript object literals are **actual objects**. You should stop putting one key/value pair per object, and start making uniform objects that contain the same attributes. When you write `[ { department_name: ... }, { shelves: ... } ]` you are defining an array that contains two unrelated objects, one containing a `department_name` property, and the other containing a `shelves` property. These objects know nothing about each other, or that they are contained in the same array. Instead of this... ``` [ { department_name: 'group_theory' }, { shelves: [ { shelf_name: 'letters_f_z' }, { authors: ["Frobenius","Lagrange"] } ] } ``` You should be writing: ``` { departments: [ { # This entire object is a department. It has a name and shelves name: 'group_theory', shelves: [ { # This entire object is a shelf. It has a name and authors. name: 'letters_f_z', authors: ["Frobenius","Lagrange"], } ] } ] } ``` To put it a different way, when you write this... ``` floors: [ {floor_name:'Fermat'}, {has_a_printing_machine:true}, {departments:[...]} ] ``` You are not making an array of floors, you're making an array of three totally unrelated objects, one that contains a `floor_name` property, one that contains a `has_a_printing_machine` property, and one that contains a `departments` property. 
If you want all three objects to have all three properties, you need to declare them that way: ``` floors: [ { name: 'floor_one', has_a_printing_machine: true, departments: [] }, { name: 'floor_two', has_a_printing_machine: false, departments: [ ... ] }, { name: 'floor_three', has_a_printing_machine: true, departments: [] }, ] ``` Upvotes: 2 <issue_comment>username_2: You **almost** had it. Each floor is an entity on its own, with its properties. One of its properties is the list of departments, and each department is an entity itself too (again, with its own properties). And same with the shelves. Each instance of an entity/struct/register is a dictionary, with its properties mapped as `key:value` pairs. So, for instance, a shelf could be: ``` var my_shelf = { shelf_name:'letters_f_z', authors: ["Frobenius","Lagrange"] }; ``` A department (with its shelves) may be: ``` { department_name:'group_theory', shelves:[ { shelf_name:'letters_f_z', authors: ["Frobenius","Lagrange"] }, {...} ] } ``` I think at this point you can extrapolate this construction to the floors level, having an array of department registries as the value of the `departments` property. Unfortunately, unlike TypeScript, which is a statically-typed superset of JavaScript, you can't enforce each register to actually have a certain set of properties. So you'll have to be extra-cautious when creating the instances so every property is initialized since its very beginning. Upvotes: 0
2018/03/16
1,500
4,749
<issue_start>username_0: I wrote code to reverse a sentence that the user inputs, but when I run this code and write a sentence, this code prints meaningless figures instead of the reversed version of my sentence. I need help locating the error.

```
#include <stdio.h>
#include <string.h>

void reverser(char*);
int readmassage(char[], int);

int main()
{
    char mysentence[30];
    readmassage(mysentence, 30);
    reverser(mysentence);
    printf("%s", mysentence);
    system("pause");
    return 0;
}

void reverser(char *massage)
{
    char temp, *p;
    p = massage + strlen(massage) - 1;
    while (p > massage)
    {
        temp = *massage;
        *massage = *p;
        *p-- = temp;
    }
}

int readmassage(char massage[], int lenght)
{
    int ch, i = 0;
    while (ch = getchar() != '\n')
    {
        if (lenght > i)
            massage[i++] = ch;
    }
    massage[i] = '\0';
    return i;
}
```<issue_comment>username_1: Your problem is here:

```
temp = *massage;
*massage = *p;
*p-- = temp;
```

`massage` always points to the first character in your string here. So you keep overwriting the first character, and then writing the new first character to the last character on the next go round. The effect is that you essentially rotate the string by one character instead of reversing it.

If you change `*massage = *p;` to `*massage++ = *p;` (or create a new incrementing pointer variable to correspond to `p`, which you initialize to `massage`), it'll probably work.

What I'm saying is that your "start of the string" pointer needs to be... massaged. *ba dum chshhhh*

**EDIT:** And you also need to change this, in `readmassage`:

```
while (ch = getchar() != '\n')
```

to:

```
while ((ch = getchar()) != '\n')
```

Otherwise, rather than the input character, you're actually setting `ch` to `0` or `1`, depending on whether `getchar()` is returning `'\n'` or not. This is because, due to the order of operations, `!=` actually gets evaluated before `=`, and `!=` gives you `1` if the expression is true and `0` if it is false. This `0` or `1` then gets stored in `ch` by the `=` operator.

You could also just replace all of `readmassage` with `fgets`, as it's included in the standard library and meant for exactly this sort of thing (unless reimplementing it was part of your assignment, of course).

Upvotes: 3 [selected_answer]<issue_comment>username_2: Please debug to find out the reason sometimes; it will help you. The problem is when you call readmassage(mysentence, 30): inside the function you have the string input. When the flow comes back to the method the value is no longer there, as you had passed mysentence by value. And you send null to reverse(). Pass by reference will work.

Upvotes: -1 <issue_comment>username_3: **EDIT:** Adding explanation. There are some issues with your code, a few major, a few not that serious:

1. Don't use `void main()`. It's not the best thing to use.
2. Indenting your code will make it easier to read.
3. Let's look at the `readmassage` function.
4. You are assigning the result of `getchar()` to an `int`. That's not right.
it should be a `char`.

Here's a version of your code with minor fixes:

```
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

void reverser(char*);
int readmassage(char[], int);

int main()
{
    char mysentence[30];
    readmassage(mysentence, 30);
    printf("Input: %s", mysentence);
    reverser(mysentence);
    printf("Output: %s", mysentence);
    system("pause");
    return 0;
}

void reverser(char *massage)
{
    char temp, *p;
    p = massage + strlen(massage) - 1;
    while (p > massage)
    {
        temp = *massage;
        *massage = *p;
        *p-- = temp;
    }
}

int readmassage(char massage[], int lenght)
{
    char ch = 0; /* initialize so the first loop test is defined */
    int i = 0;
    while (ch != '\n')
    {
        ch = getchar();
        printf("Here\n");
        if (lenght > i)
        {
            massage[i++] = ch;
            printf("%c\n", massage[i - 1]);
        }
    }
    massage[i] = '\0';
    return i;
}
```

When you try this you will find that the input string gets read fine; just be sure not to hit `Enter` after every character and instead type the full string out, as `\n` is a valid character that can be in the stream and will terminate the loop on the next iteration. This is why you should use a char-array reading function like `fgets`. Once you get here you can see that `reverser` is just following a slightly flawed reversing logic. Grammatical errors aside, fixing this, here you go:

```
#include <stdio.h>
#include <string.h>

void reverser(char* message)
{
    int start, end;
    start = 0;
    end = strlen(message) - 1;
    char temp;
    while (start < end)
    {
        temp = message[start];
        message[start] = message[end];
        message[end] = temp;
        start++;
        end--;
    }
}

void readmessage(char message[], int length)
{
    fgets(message, length, stdin);
}

int main()
{
    char mysentence[30];
    readmessage(mysentence, 30);
    printf("Read the string\n");
    reverser(mysentence);
    printf("%s\n", mysentence);
    return 0;
}
```

Upvotes: -1
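For completeness, a sketch of the asker's original program with just the two fixes from the accepted answer applied (the parenthesized assignment in the read loop and an incrementing front pointer in the swap); this is an illustration, not the answerer's own listing:

```c
#include <stdio.h>
#include <string.h>

void reverser(char *massage)
{
    char temp, *front = massage;                 /* incrementing front pointer */
    char *p = massage + strlen(massage) - 1;
    while (p > front)
    {
        temp = *front;
        *front++ = *p;                           /* advance the front after each swap */
        *p-- = temp;
    }
}

int readmassage(char massage[], int lenght)
{
    int ch, i = 0;
    while ((ch = getchar()) != '\n')             /* parentheses fix the precedence bug */
    {
        if (lenght - 1 > i)                      /* leave room for the terminator */
            massage[i++] = ch;
    }
    massage[i] = '\0';
    return i;
}

int main(void)
{
    char mysentence[30];
    readmassage(mysentence, 30);
    reverser(mysentence);
    printf("%s\n", mysentence);
    return 0;
}
```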
2018/03/16
1,303
4,703
<issue_start>username_0: I am new to Ionic 3 and mobile development. I am trying to connect a MySQL DB to my Ionic app and a PHP Restful API. I tested the API with Postman and it is working just fine, in order to implement it in Ionic I did the following, I first made a provider named Authservice: ``` import { Injectable } from '@angular/core'; import { HttpClient, HttpHeaders } from '@angular/common/http'; import 'rxjs/add/operator/map'; let apiUrl = "http://localhost/api/" /* Generated class for the AuthServiceProvider provider. See https://angular.io/guide/dependency-injection for more info on and Angular DI. */ @Injectable() export class AuthServiceProvider { constructor(public http: HttpClient) { console.log('Hello AuthServiceProvider Provider'); } postData(credentials, type) { return new Promise((resolve, reject) => { let headers = new HttpHeaders(); this.http.post(apiUrl + type, JSON.stringify(credentials), { headers: headers }) .subscribe(res => { resolve(res.json()); }, (err) => { reject(err); }); }); } } ``` And a Signup page: ``` import { Component } from '@angular/core'; import { IonicPage, NavController, NavParams } from 'ionic-angular'; import { AuthServiceProvider } from '../../providers/auth-service/auth- service'; /** * Generated class for the SignupPage page. * * See https://ionicframework.com/docs/components/#navigation for more info on * Ionic pages and navigation. */ @IonicPage() @Component({ selector: 'page-signup', templateUrl: 'signup.html', }) export class SignupPage { responseData: any; userData = {"username": "","<PASSWORD>": "", "name": "","email": ""}; constructor(public navCtrl: NavController, public authServiceProvider: AuthServiceProvider) { } signUp() { this.authServiceProvider.postData(this.userData, "signup").then((result) =>{ this.responseData = result; console.log(this.responseData); localStorage.setItem('userData', JSON.stringify(this.responseData)); }); } goToLogin() { this.navCtrl.pop(); } } ``` When running this I am getting an Uncaught (in promise): [object Object] error as can be [seen here.](https://i.stack.imgur.com/gtYdO.png) --- **UPDATE** I am now getting the following error: ``` Object { headers: {…}, status: 404, statusText: "Not Found", url: "http://localhost/PHP-SLIM-RESTFUL/API/signup", ok: false, name: "HttpErrorResponse", message: "Http failure response for http://localhost/PHP-SLIM-RESTFUL/API/signup: 404 Not Found", error: "404 Page Not Foundbody{margin:0;padding:30px;font:12px/1.5 Helvetica,Arial,Verdana,sans-serif;}h1{margin:0;font-size:48px;font-weight:normal;line-height:48px;}strong{display:inline-block;width:65px;}404 Page Not Found ================== The page you are looking for could not be found. Check the address bar to ensure your URL is spelled correctly. If all else fails, you can visit our home page at the link below. 
[Visit the Home Page](\"/PHP-Slim-Restful/api/\")" }
signup.ts:36:6
```<issue_comment>username_1: You can make use of Typescript's [`async`](https://basarat.gitbooks.io/typescript/docs/async-await.html) methods to make your life easier.

Your `postData` method made async in **AuthServiceProvider:**

```
public async postData(credentials, type): Promise<any> {
    let headers = new HttpHeaders();
    return await this.http.post(apiUrl + type, JSON.stringify(credentials), { headers: headers }).toPromise();
}
```

**Signup page:**

```
public async signUp(): Promise<void> {
    try {
        // request successful
        this.responseData = await this.authServiceProvider.postData(this.userData, "signup");
        console.log(this.responseData);
        localStorage.setItem('userData', JSON.stringify(this.responseData));
    } catch(e) {
        // some error occurred, handle it here..
        console.log(e);
    }
}
```

Don't forget to import the [`toPromise`](https://www.learnrxjs.io/operators/utility/topromise.html) operator in `AuthServiceProvider`:

```
import 'rxjs/add/operator/toPromise';
```

Upvotes: 1 [selected_answer]<issue_comment>username_2: ```
postData(credentials, type) {
    let headers = new HttpHeaders();
    return this.http.post(apiUrl + type, JSON.stringify(credentials), { headers: headers });
}
```

This will return an observable; on the signup page, just subscribe to it.

Upvotes: 1 <issue_comment>username_3: Try importing `HttpModule` in app.module.ts:

```
import { HttpModule } from '@angular/http';
```

Then add `HttpModule` to the imports:

```
imports: [BrowserModule, HttpModule, IonicModule.forRoot(MyApp)]
```

Upvotes: 0
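A sketch of what the observable version from the second answer looks like from the signup page; the handler bodies are placeholders. (Separately, the 404 body in the update suggests the server never found the route at all, so the `apiUrl` spelling and case are worth double-checking against the actual Slim route.)

```
signUp() {
  this.authServiceProvider.postData(this.userData, 'signup').subscribe(
    res => {
      this.responseData = res;
      localStorage.setItem('userData', JSON.stringify(res));
    },
    err => console.log('signup failed', err) // e.g. the 404 from a mistyped API path
  );
}
```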
2018/03/16
1,187
3,573
<issue_start>username_0: This dockerfile installs nodejs version 4.2 and I cant understand why. could someone please help me install node 9.2. i've tried taking out the -- no install-recommends command to no avail. adding more text her because stack would not let me post this even though it is a very simple question that I've looked on the web for quite some time about to no avail.adding more text her because stack would not let me post this even though it is a very simple question that I've looked on the web for quite some time about to no avail. ``` FROM ubuntu:16.04 RUN apt-get update && apt-get install -y --no-install-recommends curl sudo RUN curl -sL https://deb.nodesource.com/setup_9.x | sudo -E bash - RUN apt-get install -y nodejs && \ apt-get install --yes build-essential RUN apt-get install --yes npm #VOLUME "/usr/local/app" # Set up C++ dev env RUN apt-get update && \ apt-get dist-upgrade -y && \ apt-get install gcc-multilib g++-multilib cmake wget -y && \ apt-get clean autoclean && \ apt-get autoremove -y #wget -O /tmp/conan.deb -L https://github.com/conan-io/conan/releases/download/0.25.1/conan-ubuntu-64_0_25_1.deb && \ #dpkg -i /tmp/conan.deb #ADD ./scripts/cmake-build.sh /build.sh #RUN chmod +x /build.sh #RUN /build.sh RUN mkdir -p /usr/local/app WORKDIR /usr/local/app COPY package.json /usr/local/app RUN ["npm", "install"] COPY . . RUN echo "/usr/local/app/dm" > /etc/ld.so.conf.d/mythrift.conf RUN echo "/usr/lib/x86_64-linux-gnu" >> /etc/ld.so.conf.d/mythrift.conf RUN echo "/usr/local/lib64" >> /etc/ld.so.conf.d/mythrift.conf RUN ldconfig RUN chmod +x dm/dm3 RUN ldd dm/dm3 RUN ["chmod", "+x", "dm/dm3"] RUN ["chmod", "777", "policy"] RUN ls -al . RUN ["nodejs", "-v"] CMD ["nodejs", "-v"] ```<issue_comment>username_1: **EDIT** Apparently it's important for the OP to run exactly this version of ubuntu. Here's a sample that builds on top of `FROM ubuntu:16.04`: ``` FROM ubuntu:16.04 RUN apt-get update && apt-get install -y --reinstall ca-certificates curl build-essential \ && curl -s https://nodejs.org/dist/v9.9.0/node-v9.9.0-linux-x64.tar.xz \ -o node-v9.9.0-linux-x64.tar.xz && tar xf node-v9.9.0-linux-x64.tar.xz \ && cd node-v9.9.0-linux-x64 && cp -r bin include lib share /usr/local \ && rm -rf /node-v9.9.0-linux-x64.tar.xz /node-v9.9.0-linux-x64 CMD ["node", "-v"] ``` **Build** ``` docker build -t testing . ``` **Test** ``` docker run testing v9.9.0 ``` Note that this only takes care of the node related things and don't take into account all the other dependencies. Upvotes: 3 [selected_answer]<issue_comment>username_2: The reason you are getting node 4 is because `apt-get` only installs the default version of a package which will never be the cutting edge latest. Whilst this issue is present in a Docker container, it is not specific to Docker as it will happen on any Ubuntu installation, both inside or outside of Docker. To get the latest version you have 2 options. (1) Install using a PPA: ``` cd ~ curl -sL https://deb.nodesource.com/setup_9.x -o nodesource_setup.sh sudo bash nodesource_setup.sh sudo apt-get install nodejs nodejs -v ``` (2) Install using Node Version Manager (nvm) The latter is great because it lets you install multiple versions of Node and jump between them very quickly. Here's a link to an amazing Digital Ocean article on this very topic: <https://www.digitalocean.com/community/tutorials/how-to-install-node-js-on-ubuntu-16-04> Here's a link to NVM ... <https://github.com/creationix/nvm> Upvotes: 1
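Two hedged guesses about why the original Dockerfile ends up on 4.2, plus a sketch: Ubuntu 16.04's default `nodejs` package is the 4.x line, so if the NodeSource setup script fails silently (for instance because `ca-certificates` is missing when `curl` hits the HTTPS URL), the later `apt-get install nodejs` falls back to the distro package; and the separate `apt-get install npm` pulls Ubuntu's own `npm`, which depends on the old distro Node, while the NodeSource package already bundles a matching npm. A minimal untested sketch along the lines of the PPA answer, with both issues addressed:

```dockerfile
FROM ubuntu:16.04

RUN apt-get update && apt-get install -y --no-install-recommends curl ca-certificates \
 && curl -sL https://deb.nodesource.com/setup_9.x | bash - \
 && apt-get install -y nodejs build-essential \
 && rm -rf /var/lib/apt/lists/*

# NodeSource's nodejs ships its own npm; no separate "apt-get install npm"
CMD ["node", "-v"]
```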
2018/03/16
701
2,746
<issue_start>username_0: Setting up authentication with reactjs, firebase (google auth), react-router and redux. The problem is very simple but I can't find any resource online or answers to fix it. Unable to read property `uid` (the user id with firebase) because it's telling me it's undefined. I've set this up so that PrivateRoute is a new component and it's been imported in my app router. I also plan to have a public routes component as well. Here is my code along with screenshots of the error.

**PrivateRoute.js**

```
import React from 'react';
import { connect } from 'react-redux';
import { Route, Redirect } from 'react-router-dom';

export const PrivateRoute = (props) => (

);

const mapStateToProps = (state) => ({
 isAuthenticated: !!state.auth.uid <-error on this uid
});

export default connect(mapStateToProps)(PrivateRoute);
```

**AppRouter.js**

```
import PrivateRoute from './PrivateRoute'
```

Screenshot of errors when I'm logged out and try accessing the `/create` private route:

[![Console errors: chrome](https://i.stack.imgur.com/dLs30.png)](https://i.stack.imgur.com/dLs30.png)

*Updated to add redux store configure file*

```
import authenticationReducer from '../reducers/authentication'

export default () => {
 const store = createStore(
 combineReducers({
 expenses: expensesReducer,
 authentication: authenticationReducer
 }),
 composeEnhancers(applyMiddleware(thunk))
 );
 return store;
};
```

**Auth reducer (just in case it's needed)**

```
export default (state = {}, action) => {
 switch (action.type) {
 case 'LOGIN':
 return { uid: action.uid };
 case 'LOGOUT':
 return { };
 default:
 return state;
 }
};
```<issue_comment>username_1: You are setting the `uid` on `state.authentication.uid` and trying to access it from `state.auth.uid`.

Upvotes: 1 <issue_comment>username_2: `Cannot read property 'uid' of undefined` means you are trying something like `variable.uid` and `variable` is undefined. Based on the line with the error, `state.auth` is undefined. You should be able to look at your state there; either debug or just throw a `console.log` in your `mapStateToProps` to see what your state actually looks like:

```
const mapStateToProps = (state) => {
 console.log('state:', state); // see what state is
 return {
 isAuthenticated: !!state.auth.uid <-error on this uid
 };
}
```

Looking at `combineReducers`, it seems like you are putting the result of your `authenticationReducer` onto `state.authentication`, not `state.auth`...

```
combineReducers({
 expenses: expensesReducer,
 authentication: authenticationReducer
}),
```

Upvotes: 3 [selected_answer]
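A sketch of the aligned version, matching the reducer key to what `mapStateToProps` reads (either name works, as long as both sides agree):

```
// store configuration
combineReducers({
  expenses: expensesReducer,
  auth: authenticationReducer   // key renamed from "authentication" to "auth"
});

// PrivateRoute.js
const mapStateToProps = (state) => ({
  isAuthenticated: !!state.auth.uid  // state.auth now exists
});
```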
2018/03/16
899
3,084
<issue_start>username_0: I have a kinda complex question. Let's say that I have 7 tables (20mil+ rows each) (Table1, Table2 ...) with corresponding pk (pk1, pk2, ....) (cardinality among all tables is 1:1)

I want to get my final table (using hash join) as:

```
Create table final_table as
select
 t1.column1,
 t2.column2,
 t3.column3,
 t4.column4,
 t5.column5,
 t6.column6,
 t7.column7
from table1 t1
join table2 t2 on t1.pk1 = t2.pk2
join table3 t3 on t1.pk1 = t3.pk3
join table4 t4 on t1.pk1 = t4.pk4
join table5 t5 on t1.pk1 = t5.pk5
join table6 t6 on t1.pk1 = t6.pk6
join table7 t7 on t1.pk1 = t7.pk7
```

I would like to know if it would be faster to create partial tables and then the final table, like this:

```
Create table partial_table1 as
select t1.column1, t2.column2
from table1 t1
join table2 t2 on t1.pk1 = t2.pk2

create table partial_table2 as
select t1.column1, t1.column2, t3.column3
from partial_table1 t1
join table3 t3 on t1.pk1 = t3.pk3

create table partial_table3 as
select t1.column1, t1.column2, t1.column3, t4.column4
from partial_table2 t1
join table4 t4 on t1.pk1 = t4.pk4

...
...
...
```

I know it depends on RAM (because I want to use hash join), actual server usage, etc. I am not looking for a specific answer; I am looking for some explanation of why and in what situations it would be better to use partial results, or why it would be better to use all 7 joins in 1 select.

Thanks, I hope that my question is easy to understand.<issue_comment>username_1: In general, it is not better to create temporary tables. SQL engines have an optimization phase and this optimization phase should do well at figuring out the best query plan. In the case of a bunch of joins, this is mostly about join order, use of indexes, and the optimal algorithm.

This is a good default attitude. Does it mean that temporary tables are never useful for performance optimization? Not at all. Here are some exceptions:

* The optimizer generates a suboptimal query plan. In this case, query hints can push the optimizer in the right direction. And temporary tables can help.
* Indexing the temporary tables. Sometimes an index on the temporary tables can be a big win for performance. The optimizer might not pick this up.
* Re-use of temporary tables across queries.

For your particular goal of using hash joins, you can use a query hint to ensure that the optimizer does what you would like. I should note that if the joins are on primary keys, then a hash join might not be the optimal algorithm.

Upvotes: 2 [selected_answer]<issue_comment>username_2: It is not a good idea to create temporary tables in your database. To optimize your query for reporting purposes or faster results, try using views instead; they can lead to much better results.

For your specific case, you want to use a hash join. Can you please explain a bit more why you want that in particular? The optimizer will determine the best plan by itself, and you don't need to worry about the type of join it performs.

Upvotes: 0
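As a concrete illustration of the query-hint route the accepted answer mentions (the question doesn't name the engine, so treat this as an assumption): on SQL Server you can pin the algorithm with `OPTION (HASH JOIN)`, and Oracle has the analogous `/*+ USE_HASH(...) */` hint. A sketch in SQL Server syntax:

```
select t1.column1, t2.column2, t3.column3
from table1 t1
join table2 t2 on t1.pk1 = t2.pk2
join table3 t3 on t1.pk1 = t3.pk3
-- ... remaining joins ...
option (hash join);  -- forces a hash algorithm for every join in the statement
```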
2018/03/16
2,585
7,131
<issue_start>username_0: In simple terms, I'm trying to add columns `latitude` and `longitude` from `df1` to a smaller DataFrame called `df2` by comparing the values from their `air_id` and `hpg_id` columns: [![enter image description here](https://i.stack.imgur.com/svr7p.png)](https://i.stack.imgur.com/svr7p.png) The trick to add `latitude` and `longitude` to `df2` relies on how the comparison is made to `df1`, which could be one of 3 cases: * When there is a match between `df2.air_id` AND `df1.air_hd`; * When there is a match between `df2.hpg_id` AND `df1.hpg_hd`; * When there is a match between both of them: `[df2.air_id, df2.hpg_id]` AND `[df1.air_hd, df1.hpg_id]`; With that in mind, the expected result should be: [![enter image description here](https://i.stack.imgur.com/8CGYZ.png)](https://i.stack.imgur.com/8CGYZ.png) Notice how the `ignore_me` column from `df1` was left out of the resulting DataFrame. Here is the code to setup the DataFrames: ``` data = { 'air_id' : [ 'air1', '', 'air3', 'air4', 'air2', 'air1' ], 'hpg_id' : [ 'hpg1', 'hpg2', '', 'hpg4', '', '' ], 'latitude' : [ 101.1, 102.2, 103, 104, 102, 101.1, ], 'longitude' : [ 51, 52, 53, 54, 52, 51, ], 'ignore_me' : [ 91, 92, 93, 94, 95, 96 ] } df1 = pd.DataFrame(data) display(df1) data2 = { 'air_id' : [ '', 'air2', 'air3', 'air1' ], 'hpg_id' : [ 'hpg1', 'hpg2', '', '' ] } df2 = pd.DataFrame(data2) display(df2) ``` Unfortunately, I'm failing to use `merge()` for this task. My current result is a DataFrame with all columns from `df1` mostly filled with *NaNs*: [![enter image description here](https://i.stack.imgur.com/CYCYf.png)](https://i.stack.imgur.com/CYCYf.png) How can I copy these specific columns from `df1` using the rules above?<issue_comment>username_1: Here is one way to accomplish what you're trying to do. First use `merge()` twice. First on `air_id`, then on `hpg_id`. For both, ignore the trivial case when the key is an empty string. ``` result = df2\ .merge( df1[df1['air_id']!=''].drop(['hpg_id'], axis=1), on=['air_id'], how='left' )\ .merge( df1[df1['hpg_id']!=''].drop(['air_id'], axis=1), on=['hpg_id'], how='left' ) print(result) # air_id hpg_id ignore_me_x latitude_x longitude_x ignore_me_y \ #0 hpg1 NaN NaN NaN 91 #1 air2 hpg2 92.0 102.0 52.0 92 #2 hpg3 NaN NaN NaN 93 # # latitude_y longitude_y #0 101 51 #1 102 52 #2 103 53 ``` However, this creates duplicates for the columns you want. (I drop the other join key on each call to merge to avoid duplicate column names for those.) We can coalesce these values by adapting one of the methods described on [this post](https://stackoverflow.com/questions/38152389/coalesce-values-from-2-columns-into-a-single-column-in-a-pandas-dataframe). ``` cols = ['latitude', 'longitude'] colsx = list(map(lambda c: c+"_x", cols)) # list needed for python3 colsy = list(map(lambda c: c+"_y", cols)) # list needed for python3 result[cols] = pd.DataFrame( np.where(result[colsx].isnull() == True, result[colsy], result[colsx]) ) result = result[['air_id', 'hpg_id'] + cols] print(result) # air_id hpg_id latitude longitude #0 hpg1 101.0 51.0 #1 air2 hpg2 102.0 52.0 #2 air3 103.0 53.0 ``` --- **UPDATE** In the case where the merge produces duplicate entries, you can use [`pandas.DataFrame.drop_duplicates()`](http://pandas.pydata.org/pandas-docs/version/0.17/generated/pandas.DataFrame.drop_duplicates.html). ``` result = result.drop_duplicates() ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: This is one manual way without merges. 
It isn't efficient, but might be manageable if it performs adequately for your use case. ``` df1['lat_long'] = list(zip(df1['latitude'], df1['longitude'])) air = df1[df1['air_id'] != ''].set_index('air_id')['lat_long'] hpg = df1[df1['hpg_id'] != ''].set_index('hpg_id')['lat_long'] def mapper(row): myair, myhpg = row['air_id'], row['hpg_id'] if (myair != '') and (myair in air): return air.get(myair) elif (myhpg != '') and (myhpg in hpg): return hpg.get(myhpg) elif (myair != '') and (myair in hpg): return hpg.get(myair) elif (myhpg != '') and (myhpg in air): return air.get(myhpg) else: return (None, None) df2['lat_long'] = df2.apply(mapper, axis=1) df2[['latitude', 'longitude']] = df2['lat_long'].apply(pd.Series) df2 = df2.drop('lat_long', 1) # air_id hpg_id latitude longitude # 0 hpg1 101 51 # 1 air2 hpg2 102 52 # 2 hpg3 103 53 ``` Upvotes: 1 <issue_comment>username_3: Using sets and Numpy broadcasting to handle the matching of stuff... sprinkled with fairy dust ``` ids = ['air_id', 'hpg_id'] cols = ['latitude', 'longitude'] def true(s): return s.astype(bool) s2 = df2.stack().loc[true].groupby(level=0).apply(set) s1 = df1[ids].stack().loc[true].groupby(level=0).apply(set) i, j = np.where((s1.values & s2.values[:, None]).astype(bool)) a = np.zeros((len(df2), 2), int) a[i, :] = df1[cols].values[j] df2.join(pd.DataFrame(a, df2.index, cols)) air_id hpg_id latitude longitude 0 hpg1 101 51 1 air2 hpg2 102 52 2 hpg3 103 53 ``` --- **Details** `s2` looks like this ``` 0 {hpg1} 1 {air2, hpg2} 2 {hpg3} dtype: object ``` And `s1` ``` 0 {air1, hpg1} 1 {hpg2} 2 {hpg3} 3 {air4, hpg4} 4 {air2} dtype: object ``` The point is that we wanted to find if anything from that row matched with anything else in a row from the other data frame. Now I can use broadcasting and `&` ``` s1.values & s2.values[:, None] array([[{'hpg1'}, set(), set(), set(), set()], [set(), {'hpg2'}, set(), set(), {'air2'}], [set(), set(), {'hpg3'}, set(), set()]], dtype=object) ``` But empty sets evaluate to `False` in a boolean context so ``` (s1.values & s2.values[:, None]).astype(bool) array([[ True, False, False, False, False], [False, True, False, False, True], [False, False, True, False, False]], dtype=bool) ``` And now I can use `np.where` to show me where these `True`s are ``` i, j = np.where((s1.values & s2.values[:, None]).astype(bool)) print(i, j) [0 1 1 2] [0 1 4 2] ``` Those are the the rows from `df2` and `df1` respectively. But I don't need two row `1` so I create an empty array of the appropriate size with the expectations that I'll overwrite row `1`. I fill these values with the lats and lons from `df1` ``` a = np.zeros((len(df2), 2), int) a[i, :] = df1[cols].values[j] a array([[101, 51], [102, 52], [103, 53]]) ``` I then wrap this in a `pd.DataFrame` and join as we see above. Upvotes: 2
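A compact variant of the first answer's coalescing step, for reference: pandas' `combine_first` fills the NaNs of one Series with values from another, which can replace the manual `np.where` juggling. This assumes the same `_x`/`_y` suffixed result of the two merges:

```python
for c in ['latitude', 'longitude']:
    result[c] = result[c + '_x'].combine_first(result[c + '_y'])

result = result[['air_id', 'hpg_id', 'latitude', 'longitude']].drop_duplicates()
```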
2018/03/16
286
974
<issue_start>username_0: I'm doing some VBA recently for work and I need to check in a webpage to click a button if is enable and don't click if is disabled. BUT! I don't know how to make VBA check the disabled button. Here is the button code: ```htm ``` I tried some If, but didn't work ``` Set nxt = IE.document.getElementById("formAction:proxima") If nxt Is Nothing Then IE.Quit Else nxt.Click End If ```<issue_comment>username_1: You need to access the button `disabled` property. Try this: ``` Set nxt = IE.document.getElementById("formAction:proxima") If nxt.getAttribute("disabled") = "disabled" Then IE.Quit Else nxt.Click End If ``` Upvotes: 2 <issue_comment>username_2: You can also use the `.disabled` property *(Boolean)*. ``` Dim nxt As HTMLButtonElement Set nxt = IE.document.getElementById("formAction:proxima") If nxt.disabled Then ie.Quit Else nxt.Click End If ``` Upvotes: 4 [selected_answer]
2018/03/16
1,196
2,720
<issue_start>username_0: I am trying to add rows to a data frame based on the minimum and maximum data **within** each group. Suppose this is my original data frame: ``` df = data.frame(Date = as.Date(c("2017-12-01", "2018-01-01", "2017-12-01", "2018-01-01", "2018-02-01","2017-12-01", "2018-02-01")), Group = c(1,1,2,2,2,3,3), Value = c(100, 200, 150, 125, 200, 150, 175)) ``` Notice that Group 1 has 2 consecutive dates, group 2 has 3 consecutive dates, and group 3 is missing the date in the middle (2018-01-01). I'd like to be able to complete the data frame by adding rows for missing dates. But the thing is I only want to add additional dates based on dates that are missing between the minimum and maximum date within each group. So if I were to complete this data frame it would look like this: ``` df_complete = data.frame(Date = as.Date(c("2017-12-01", "2018-01-01", "2017-12-01", "2018-01-01", "2018-02-01","2017-12-01","2018-01-01", "2018-02-01")), Group = c(1,1,2,2,2,3,3,3), Value = c(100, 200, 150, 125, 200, 150,NA, 175)) ``` Only one row was added because Group 3 was missing one date. There was no date added for Group 1 because it had all the dates between its minimum (2017-12-01) and maximum date (2018-01-01).<issue_comment>username_1: You can use `tidyr::complete` with `dplyr` to find a solution. The `interval` between consecutive dates seems to be `month`. The approach will be as below: ``` library(dplyr) library(tidyr) df %>% group_by(Group) %>% complete(Group, Date = seq.Date(min(Date), max(Date), by = "month")) # A tibble: 8 x 3 # Groups: Group [3] # Group Date Value # # 1 1.00 2017-12-01 100 # 2 1.00 2018-01-01 200 # 3 2.00 2017-12-01 150 # 4 2.00 2018-01-01 125 # 5 2.00 2018-02-01 200 # 6 3.00 2017-12-01 150 # 7 3.00 2018-01-01 NA # 8 3.00 2018-02-01 175 ``` Data ``` df = data.frame(Date = as.Date(c("2017-12-01", "2018-01-01", "2017-12-01", "2018-01-01", "2018-02-01","2017-12-01", "2018-02-01")), Group = c(1,1,2,2,2,3,3), Value = c(100, 200, 150, 125, 200, 150, 175)) ``` Upvotes: 4 <issue_comment>username_2: [@username_1's approach](https://stackoverflow.com/a/49330319) of using `tidyr::complete` with `dplyr` is good, but will fail if the group column is not numeric. It will then be typecast as factors and the `complete()` operation will then result in a tibble with a row for every factor/time combination for each group. `complete()` does not need the group variable as first argument, so the solution is ``` library(dplyr) library(tidyr) df %>% group_by(Group) %>% complete(Date = seq.Date(min(Date), max(Date), by = "month")) ``` Upvotes: 2
2018/03/16
2,094
7,787
<issue_start>username_0: I'm new to VueJS and Webpack. I've created a project with the VueJS CLI and am trying to work with it. I need to insert a CDN script into my code. When working with standard HTML, CSS & JS solutions, I'd include CDNs like this:

```html
False
Merge
```

As you can see, you can add a CDN script with the HTML script tag, and start using it in the JS. I'm trying to do the same with VueJS in a component. I've got the template and style sections ready. Unfortunately, I don't know how to add, in a simple way, a CDN to use immediately in the script tag within the Vue component. I tried to do this but it is not working.

```html
export default {
 name: 'Index',
 data() {
 return {
 }
 }
}
```

Is there a way to add a CDN (without Webpack or NPM) to a VueJS component?<issue_comment>username_1: Unfortunately, no, you can't add a `<script>` tag to a specific component **via template**.

In your case you have some options:

**1: Use NPM**

Properly install the dependency using `npm`.

* **Pros:** proper usage of NPM and Webpack; scoped definition;
* **Cons:** the script must be available as an NPM package.
* Note: when available, this is the **recommended** approach.
* Steps:
  * For your case, [you can check on the `datatables` official page that they do have an NPM package](https://datatables.net/download/). It could be used like:

    ```
    npm install --save datatables.net-dt
    ```

  * And in your **.vue** file:

    ```
    require('datatables.net-dt')();

    export default {
      name: 'Index',
      data() {
        return { }
      }
    }
    ```

**2: Add a `<script>` tag to `index.html`**

Locate and add the `<script>` tag to your `index.html`.

* **Pros:** the `<script>` tag is clearly (and declaratively) added to the HTML source. The script will only be loaded once.
* **Cons:** the script will be globally loaded.
* Steps:
  * Just add the `<script type="text/javascript" src="https://cdn.datatables.net/v/dt/dt-1.10.16/sl-1.2.5/datatables.min.js">` to the end of the `index.html` file, preferably right before `</body>`.

**3: Create the `<script>` tag programmatically**

The other alternative is to create the `script` tag programmatically at the component, when the component is loaded.

* **Pros:** the code stays in the component only.
Your external script will be loaded only when the component is loaded.

* **Cons:** the script still will be globally available once it is loaded.
* Steps/Code:

  ```
  export default {
    name: 'Index',
    data() {
      return { }
    },
    mounted() {
      if (document.getElementById('my-datatable')) return; // was already loaded
      var scriptTag = document.createElement("script");
      scriptTag.src = "https://cdn.datatables.net/v/dt/dt-1.10.16/sl-1.2.5/datatables.min.js";
      scriptTag.id = "my-datatable";
      document.getElementsByTagName('head')[0].appendChild(scriptTag);
    }
  }
  ```

Upvotes: 7 [selected_answer]<issue_comment>username_2: I don't know if this is still a concern, but you could also give vue-meta a look. I'm using it to create a better SEO implementation, but with it, you can include CSS and/or JS files for specific components. You can even set the individual files to preload if you wanted. Here's a pretty good write-up: <https://alligator.io/vuejs/vue-seo-tips/>

In there it says that vue-meta isn't stable, but the article was written in February of 2018, and the version as of today is 2.2.1.

1. add this line to your package.json file within the dependencies object: `"vue-meta": "^2.2.1",`

*note - omit the trailing comma if it's to be the last line of the dependencies object*

2. open a terminal and cd to the dir which contains the above mentioned package.json file. (BTW, this is all super easy if you use the vue ui).

3. in the terminal run: `npm install`

Then add the following to your main.js file:

```
import Meta from "vue-meta";
Vue.use(Meta);
```

Now you can freely load static CSS/JS assets. This works for local files or from a CDN. Below is my example.

*Disregard my imports, components and methods... they aren't related to vue-meta and may differ from yours.
I just wanted to show you a working version.*

```
import { page } from "vue-analytics";
import Header from "@/components/Header.vue";
import Footer from "@/components/Footer.vue";

export default {
  components: {
    Header,
    Footer
  },
  data: function() {
    return {};
  },
  methods: {
    track() {
      page("/");
    }
  },
  metaInfo: {
    link: [
      {
        rel: "preload",
        as: "style",
        href: "https://cdn.jsdelivr.net/npm/bootstrap-vue@2.0.0-rc.28/dist/bootstrap-vue.min.css"
      },
      {
        rel: "preload",
        as: "style",
        href: "https://fonts.googleapis.com/css?family=Cinzel|Great+Vibes|Montserra"
      },
      {
        rel: "preload",
        as: "style",
        href: "/content/css/site.css"
      },
      {
        rel: "stylesheet",
        href: "https://fonts.googleapis.com/css?family=Cinzel|Great+Vibes|Montserra"
      },
      {
        rel: "stylesheet",
        href: "https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css",
        integrity: "<KEY>",
        crossorigin: "anonymous"
      },
      {
        rel: "stylesheet",
        href: "https://cdn.jsdelivr.net/npm/bootstrap-vue@2.0.0-rc.28/dist/bootstrap-vue.min.css",
        async: true,
        defer: true
      },
      {
        rel: "stylesheet",
        href: "https://use.fontawesome.com/releases/v5.8.1/css/all.css",
        integrity: "<KEY>",
        crossorigin: "anonymous",
        async: true,
        defer: true
      },
      {
        rel: "stylesheet",
        href: "/content/css/site.css",
        async: true,
        defer: true
      },
      { rel: "favicon", href: "favicon.ico" }
    ],
    script: [
      { src: "https://unpkg.com/axios/dist/axios.min.js", async: true, defer: true }
    ]
  }
};
```

Upvotes: 2 <issue_comment>username_3: <https://renatello.com/vue-js-external-css>

1. Include CSS file in one component
2. Include globally
3. Include in index.html

Upvotes: 0
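A footnote on option 3 of the accepted answer: if several components need the same external library, the `mounted()` logic can be factored into a small promise-based loader. This is only a sketch of that idea, not part of any answer above; the `loadScript` helper name is invented here, and it assumes the same datatables CDN URL:

```js
// loadScript.js: resolves once the external script is available,
// reusing the same script tag if another component already injected it.
export function loadScript(src, id) {
  return new Promise((resolve, reject) => {
    const existing = document.getElementById(id);
    if (existing) return resolve(existing); // already injected elsewhere
    const tag = document.createElement('script');
    tag.src = src;
    tag.id = id;
    tag.onload = () => resolve(tag);
    tag.onerror = () => reject(new Error('Failed to load ' + src));
    document.head.appendChild(tag);
  });
}
```

Inside a component it would then be used as `mounted() { loadScript('https://cdn.datatables.net/v/dt/dt-1.10.16/sl-1.2.5/datatables.min.js', 'my-datatable').then(() => { /* library is ready */ }); }`, which also gives you a hook for code that must not run before the script has loaded.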
2018/03/16
2,037
7,719
<issue_start>username_0: Just started with a company and noticed that their database was set to Simple Recovery. I talked to the owner and suggested converting it to Full Recovery, and explained to him the benefit of using a transaction log backed up every hour. After he agreed, I did a full DB backup prior to the conversion, then scheduled an hourly backup for the transaction log file and a full nightly backup for the data file.

It was my impression that once the hourly backup started running, the size of the transaction log (60GB) would shrink. It's been more than a month but the size of the transaction log is still the same.

Is it okay to run `DBCC ShrinkDB` against the log file without detaching and attaching the database?
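A note on the underlying mechanics: in FULL recovery, hourly log backups mark log space as reusable but never shrink the physical file, so the 60GB file staying the same size is expected. A one-off `DBCC SHRINKFILE` releases the unused space, and it runs online; no detach/attach is required. A minimal T-SQL sketch (the database name `MyDb` and the logical log name `MyDb_log` are placeholders; look up the real logical name in `sys.database_files`):

```sql
USE MyDb;
GO
-- Find the logical name and current size of the log file first.
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;
GO
-- Back up the log so the inactive portion can be released...
BACKUP LOG MyDb TO DISK = N'C:\Backups\MyDb_log.trn';
GO
-- ...then shrink the physical log file down to a target size in MB (here ~4 GB).
DBCC SHRINKFILE (MyDb_log, 4096);
GO
```

Shrinking the log repeatedly is counterproductive (it just regrows), so this is normally a one-time cleanup after the recovery-model switch.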
2018/03/16
1,247
4,948
<issue_start>username_0: When I run the Firestore transaction it gives me the error:

> "Every document read in a transaction must also be written in that transaction."

But all the documents that I am trying to read in the transaction are also being written to in the code below?

```
db.runTransaction({ (transaction, errorPointer) -> Any? in
    let doc1: DocumentSnapshot
    let doc2: DocumentSnapshot
    let doc3: DocumentSnapshot
    do {
        try doc1 = transaction.getDocument(self.doc1Ref)
        try doc2 = transaction.getDocument(self.doc2Ref)
        try doc3 = transaction.getDocument(self.doc3Ref)
    } catch let fetchError as NSError {
        errorPointer?.pointee = fetchError
        return nil
    }

    // WRITES TO DOC1, DOC2, DOC3
    guard let userPostCount = doc1.data()?["postCount"] as? Int,
        let locationPostCount = doc2.data()?["postCount"] as? Int,
        let itemPostCount = doc3.data()?["postCount"] as? Int,
        let locationAvgRating = doc2.data()?["averageRating"] as? Float,
        let itemAvgRating = doc3.data()?["averageRating"] as? Float
        else {
            return nil
    }

    // Create post on another document not in transaction
    transaction.setData(post.dictionary, forDocument: self.doc4Ref)

    // Update counts for #userPosts, #locationPosts, #itemPosts, avgLocationRating, avgItemRating
    transaction.updateData(["postCount": userPostCount + 1], forDocument: self.doc1Ref)

    let newAvgLocationRating = ((avgLocationRating * Float(locationPostCount)) + Float(post.rating)) / (Float(locationPostCount) + 1.0)
    transaction.updateData(["postCount": locationPostCount + 1, "averageRating": newAvgLocationRating], forDocument: self.doc2Ref)

    let newAvgItemRating = ((avgItemRating * Float(itemPostCount)) + Float(post.rating)) / (Float(itemPostCount) + 1.0)
    transaction.updateData(["postCount": locationPostCount + 1, "averageRating": newAvgItemRating], forDocument: self.doc3Ref)

    // Add postID to user and location
    transaction.setData(["postID": self.postRef.documentID, "locationID": post.locationID, "itemID": post.itemID, "rating": post.rating, "timestamp": post.timestamp], forDocument: self.doc1Ref.collection("posts").document(self.postRef.documentID))
    transaction.setData(["postID": self.postRef.documentID, "rating": post.rating, "timestamp": post.timestamp], forDocument: self.doc3Ref.collection("posts").document(self.postRef.documentID))

    return nil
}) { (object, error) in
    if let error = error {
        print(error)
    } else {
        print("done")
    }
}
```

Is it just not possible to do multiple .getDocument(*documentReference*) calls in a transaction? Is there another way to accomplish this?<issue_comment>username_1: *I just noticed you are using Apple's swift scripting. My answer is standards based JavaScript but I hope it helps you. I don't use or know anything about Swift...*

As far as reading data from 3 documents simultaneously, the way to do this is using javascript's `Promise.all`. While the below code is not precisely your solution, you can get the gist of how to do it. You basically push each `.get` into an array, then you `Promise.all` that array, `.then` you iterate over the returned data.

```
try {
    targetRef.get().then(function(snap) {
        if (snap.exists) {
            for (var key in snap.data()) {
                if (snap.data().hasOwnProperty(key)) {
                    fetchPromise = itemRef.doc(key).get();
                    fetchArray.push(fetchPromise);
                }
            }
            Promise.all(fetchArray).then(function(values) {
                populateSelectFromQueryValues(values, "order-panel-one-item-select");
                if (!selectIsEmpty("order-panel-one-item-select")) {
                    enableSelect("order-panel-one-item-select");
                }
            });
        }
    }).catch(function(error) {
        toast("error check console");
        console.log("Error getting document:", error);
    });
}
```

Upvotes: 0 <issue_comment>username_2: You should put all your `if` operations in the `do` block and the `else` operations in the `catch` block; don't use them outside the block. That's what the error is saying: `"Every document read in a transaction must also be written in that transaction."` For example, this part of your code:

```
// WRITES TO DOC1, DOC2, DOC3
guard let userPostCount = doc1.data()?["postCount"] as? Int,
    let locationPostCount = doc2.data()?["postCount"] as? Int,
    let itemPostCount = doc3.data()?["postCount"] as? Int,
    let locationAvgRating = doc2.data()?["averageRating"] as? Float,
    let itemAvgRating = doc3.data()?["averageRating"] as? Float
    else {
        return nil
}
```

is doing operations with `doc1` and `doc2` outside the scope of the do/catch block.

Upvotes: 1
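Distilling username_1's pattern down to its core, for reference (hypothetical `doc1Ref`/`doc2Ref`/`doc3Ref` document references, using the Firestore web SDK where `DocumentReference.get()` returns a promise):

```js
// Fetch several documents in parallel and wait for all of them.
const refs = [doc1Ref, doc2Ref, doc3Ref];

Promise.all(refs.map(ref => ref.get()))
  .then(snapshots => {
    // snapshots arrive in the same order as refs
    const [doc1, doc2, doc3] = snapshots;
    console.log(doc1.data(), doc2.data(), doc3.data());
  })
  .catch(error => console.error("One of the reads failed:", error));
```

Note this runs outside a transaction; inside one, reads go through the transaction object so the server can detect conflicts, which is what the Swift code in the question is doing.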
2018/03/16
600
2,525
<issue_start>username_0: I'm including a video in a page and have had no problems in Chrome or Safari thus far, but on Firefox the videos are muted and the volume can't be changed. Here is the video code:

```
Your browser doesn't support HTML5 video. Here is a
```

Any thoughts on how to fix this?
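The `<video>` markup was stripped from this post, so the exact cause can't be pinned down, but one known Firefox behaviour is worth checking: when Firefox can't find or decode the video's audio track, it plays the video silently and greys out the volume control, even though Chrome/Safari play the audio fine. A quick console diagnostic (the `mozHasAudio` property is a non-standard, Firefox-only flag):

```js
const video = document.querySelector('video');
console.log('muted attribute:', video.muted); // true would explain forced silence
console.log('volume:', video.volume);         // 0 would too
// Firefox-only: false means Firefox sees no decodable audio track at all
console.log('mozHasAudio:', video.mozHasAudio);
```

If `mozHasAudio` is `false` while other browsers do play sound, re-encoding the audio track to a codec that build of Firefox supports is the usual fix.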
2018/03/16
1,026
3,901
<issue_start>username_0: I have an Ionic application where I have created a component to show some data of an object. My problem is that when I update the data in the parent that hosts the component, the data within the component does not update:

my-card.component.ts

```
@Component({
  selector: 'my-card',
  templateUrl: './my-card.html'
})
export class MyCard {
  @Input('item') public item: any;
  @Output() itemChange = new EventEmitter();

  constructor() { }

  ngOnInit() {
    // I do an ajax call here and populate more fields in the item.
    this.getMoreData().subscribe(data => {
      if (data.item){
        this.item = data.item;
      }
      this.itemChange.emit(this.item);
    });
  }
}
```

my-card.html

```
<div *ngFor="let subitem of item.subitems">
  {{subitem.title}}
</div>
```

And in the parent I use the component like this:

```
<my-card [(item)]="item"></my-card>
```

And the ts file for the parent:

```
@IonicPage()
@Component({
  selector: 'page-one',
  templateUrl: 'one.html',
})
export class OnePage {
  public item = null;

  constructor(public navCtrl: NavController, public navParams: NavParams) {
    this.item = {id: 1, subitems: []};
  }

  addSubItem(): void {
    // AJAX call to save the new item to DB and return the new subitem.
    this.addNewSubItem().subscribe(data => {
      let newSubItem = data.item;
      this.item.subitems.push(newSubItem);
    });
  }
}
```

So when I call the addSubItem() function it doesn't update the component and the ngFor loop still doesn't display anything.<issue_comment>username_1: If the `getMoreData` method returns an observable, this code needs to look as follows:

```
ngOnInit() {
  // I do an ajax call here and populate more fields in the item.
  this.getMoreData().subscribe(
    updatedItem => this.item = updatedItem
  );
}
```

The subscribe causes the async operation to execute and returns an observable. When the data comes back from the async operation, it executes the provided callback function and assigns the item to the returned item.

Upvotes: 0 <issue_comment>username_2: You declared item with the `@Input()` decorator as:

```
@Input('item') public item: any;
```

But you use two-way binding on it:

```
<my-card [(item)]="item"></my-card>
```

If it is input only, it should be

```
<my-card [item]="item"></my-card>
```

Now if you invoke `addSubItem()` it should display the newly added item.

```
this.item = this.getMoreData();
```

The `getMoreData()` doesn't make sense if you put it in your card component, as you want to use the item passed via `@Input()`.

Upvotes: 0 <issue_comment>username_3: Your component interactions are a little off. Check out the guide on the Angular docs (<https://angular.io/guide/component-interaction>). Specifically, using ngOnChanges (<https://angular.io/guide/component-interaction#intercept-input-property-changes-with-ngonchanges>) or use a service to subscribe and monitor changes between the parent and the child (<https://angular.io/guide/component-interaction#parent-and-children-communicate-via-a-service>).

Upvotes: 0 <issue_comment>username_4: You are breaking the object reference when you are making the api request. You are assigning a new value that is overwriting the input value you get from the parent, so the objects are no longer pointing to the same object; `item` in your child is a completely different object. As you want two-way binding, we can make use of `Output`:

Child:

```
import { EventEmitter, Output } from '@angular/core';

// ..

@Input() item: any;
@Output() itemChange = new EventEmitter();

ngOnInit() {
  // I do an ajax call here and populate more fields in the item.
  this.getMoreData(this.item.id).subscribe(data => {
    this.item = data; // 'recreate' the object reference
    this.itemChange.emit(this.item)
  });
}
```

Now we have the same object reference again, and whatever you do in the parent will reflect in the child.

Upvotes: 2 [selected_answer]
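For completeness, the reason the accepted answer works: an `@Output()` named exactly `itemChange` pairs with the `@Input()` named `item`, which is Angular's convention for the two-way "banana-in-a-box" syntax. The parent side would look something like this (sketch; the parent's template file name is assumed):

```html
<!-- one.html: [(item)] is shorthand for [item]="item" (itemChange)="item = $event" -->
<my-card [(item)]="item"></my-card>
```

Each time the child emits `itemChange`, the parent's `item` is reassigned to the emitted value, so both sides share the new object reference again.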
2018/03/16
1,137
3,448
<issue_start>username_0: For a rectangular region we can select all indices in a 2D array very efficiently:

```
arr[y:y+height, x:x+width]
```

...where `(x, y)` is the upper-left corner of the rectangle and `height` and `width` the height (number of rows) and width (number of columns) of the rectangular selection.

Now, let's say we want to select all indices in a 2D array located in a certain circle given center coordinates `(cx, cy)` and radius `r`. Is there a numpy function to achieve this efficiently?

Currently I am pre-computing the indices manually by having a Python loop that adds indices into a buffer (list). Thus, this is pretty inefficient for large 2D arrays, since I need to queue up every integer lying in some circle.

```
# buffer for x & y indices
indices_x = list()
indices_y = list()

# lower and upper index range
x_lower, x_upper = int(max(cx-r, 0)), int(min(cx+r, arr.shape[1]-1))
y_lower, y_upper = int(max(cy-r, 0)), int(min(cy+r, arr.shape[0]-1))
range_x = range(x_lower, x_upper)
range_y = range(y_lower, y_upper)

# loop over all indices
for y, x in product(range_y, range_x):
    # check if point lies within radius r
    if (x-cx)**2 + (y-cy)**2 < r**2:
        indices_y.append(y)
        indices_x.append(x)

# circle indexing
arr[(indices_y, indices_x)]
```

As mentioned, this procedure gets quite inefficient for larger arrays / circles. Any ideas for speeding things up?

If there is a better way to index a circle, does this also apply for "arbitrary" 2D shapes? For example, could I somehow pass a function that expresses membership of points for an arbitrary shape to get the corresponding numpy indices of an array?<issue_comment>username_1: You could define a mask that contains the circle. Below, I have demonstrated it for a circle, but you could write any arbitrary function in the `mask` assignment. The field `mask` has the dimensions of `arr` and has the value `True` if the condition on the right-hand side is satisfied, and `False` otherwise. This mask can be used in combination with the indexing operator to assign to only a selection of indices, as the line `arr[mask] = 123.` demonstrates.

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.arange(0, 32)
y = np.arange(0, 32)
arr = np.zeros((y.size, x.size))

cx = 12.
cy = 16.
r = 5.

# The two lines below could be merged, but I stored the mask
# for code clarity.
mask = (x[np.newaxis,:]-cx)**2 + (y[:,np.newaxis]-cy)**2 < r**2
arr[mask] = 123.

# This plot shows that only within the circle the value is set to 123.
plt.figure(figsize=(6, 6))
plt.pcolormesh(x, y, arr)
plt.colorbar()
plt.show()
```

Upvotes: 5 [selected_answer]<issue_comment>username_2: Thank you username_1 for your answer, but I couldn't see radius 5 in the output (the diameter is 9 in the output, not 10). One can **subtract .5** from **cx and cy** to produce a diameter of 10:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.arange(0, 32)
y = np.arange(0, 32)
arr = np.zeros((y.size, x.size))

cx = 12.-.5
cy = 16.-.5
r = 5.

# The two lines below could be merged, but I stored the mask
# for code clarity.
mask = (x[np.newaxis,:]-cx)**2 + (y[:,np.newaxis]-cy)**2 < r**2
arr[mask] = 123.

# This plot shows that only within the circle the value is set to 123.
plt.figure(figsize=(6, 6))
plt.pcolormesh(x, y, arr)
plt.colorbar()
plt.show()
```

[![difference](https://i.stack.imgur.com/9TjBI.png)](https://i.stack.imgur.com/9TjBI.png)

Upvotes: -1
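On the question's closing point about passing an arbitrary membership function: the accepted mask idea generalizes directly. A small sketch (the helper name `shape_mask` is invented here; it relies only on numpy broadcasting, just like the accepted answer):

```python
import numpy as np

def shape_mask(shape, predicate):
    """Boolean mask for a 2D array: predicate(x, y) decides membership per cell."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]  # per-cell row/column coordinates
    return predicate(xx, yy)

arr = np.zeros((32, 32))
cx, cy, r = 12, 16, 5

# The same circle as in the accepted answer...
arr[shape_mask(arr.shape, lambda x, y: (x - cx)**2 + (y - cy)**2 < r**2)] = 123.

# ...or any other shape, e.g. a diamond (L1 ball) around the same center.
arr[shape_mask(arr.shape, lambda x, y: abs(x - cx) + abs(y - cy) < r)] = 64.
```

The predicate is evaluated vectorized over the whole coordinate grid, so this stays O(height*width) with no Python-level loop, regardless of the shape.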
2018/03/16
287
1,060
<issue_start>username_0: I don't understand how this if statement is executed when timer is nil:

```
if timer != nil {
    logInfo("Cancelling notification for op=\(op), count=\(self.retryWindow.count)")
    timer!.invalidate()
```

[![enter image description here](https://i.stack.imgur.com/Q5YzW.png)](https://i.stack.imgur.com/Q5YzW.png)<issue_comment>username_1: Multi-threading is a possible cause. In between your `if` statement and your call to `invalidate`, `timer` could have been set to `nil` on another thread. Don't check against `nil`. Safely unwrap.

```
if let timer = timer {
    timer.invalidate()
}
```

Upvotes: 1 <issue_comment>username_2: The easiest thing to do is use the power of magical optionals. No `if let` or `guard`: since your timer is already an optional, call the method on top of the optional, and everything will be taken care of by the compiler automatically.

```
timer?.invalidate()
timer = nil
```

What it will do is call the method `invalidate` only if `timer` exists.

Upvotes: 1 [selected_answer]
2018/03/16
3,976
17,239
<issue_start>username_0: I have a active project that has always used C#, Entity Framework, and SQL Server. However, with the feasibility of NoSQL alternatives daily increasing, I am researching all the implications of switching the project to use MongoDB. It is obvious that the major transition hurdles would be due to being "schema-less". A good summary of what that implies for languages like C# is found [here](https://mongodb.github.io/mongo-csharp-driver/2.3/reference/bson/mapping/schema_changes/) in the official MongoDB documentation. Here are the most helpful relevant paragraphs (bold added): > > Just because MongoDB is schema-less does not mean that your code can > handle a schema-less document. Most likely, if you are using a > statically typed language like C# or VB.NET, then your code is not > flexible and needs to be mapped to a known schema. > > > There are a number of different ways that a schema can change from one > version of your application to the next. > > > How you handle these is up to you. There are two different strategies: > Write an upgrade script. Incrementally update your documents as they > are used. **The easiest strategy is to write an upgrade script. There is > effectively no difference to this method between a relational database > (SQL Server, Oracle) and MongoDB.** Identify the documents that need to > be changed and update them. > > > Alternatively, and not supportable in most relational databases, is > the incremental upgrade. The idea is that your documents get updated > as they are used. Documents that are never used never get updated. > Because of this, there are some definite pitfalls you will need to be > aware of. > > > First, queries against a schema where half the documents are version 1 > and half the documents are version 2 could go awry. For instance, if > you rename an element, then your query will need to test both the old > element name and the new element name to get all the results. > > > Second, any incremental upgrade code must stay in the code-base until > all the documents have been upgraded. For instance, if there have been > 3 versions of a document, [1, 2, and 3] and we remove the upgrade code > from version 1 to version 2, any documents that still exist as version > 1 are un-upgradeable. > > > The tooling for managing/creating such an initialization or upgrade scripts in SQL ecosystem is very mature (e.g. [Entity Framework Migrations](https://msdn.microsoft.com/en-us/library/jj591621(v=vs.113).aspx)) While there are [similar tools](https://github.com/emirotin/mongodb-migrations) and [homemade scripts](https://derickrethans.nl/managing-schema-changes.html) available for such upgrades in the NoSQL world ([though some believe there should not be](https://stackoverflow.com/questions/1961013/are-there-any-tools-for-schema-migration-for-nosql-databases)), there seems to be less consensus on "when" and "how" to run these upgrade scripts. [Some](https://blog.coinbase.com/how-we-do-mongodb-migrations-at-coinbase-47f18110d17f) suggest after deployment. Unfortunately this approach (when not used in conjunction with incremental updating) can leave the application in an unusable state when attempting to read existing data for which the C# model has changed. 
If > > "[The easiest strategy is to write an upgrade script.](https://mongodb.github.io/mongo-csharp-driver/2.3/reference/bson/mapping/schema_changes/)" > > > is truly the easiest/recommended approach for static .NET languages like C#, are there existing tools for code-first schema migration in NoSql Databases for those languages? or is the NoSql ecosystem not to that point of maturity? If you disagree with MongoDB's suggestion, what is a better implementation, and can you give some reference/examples of where I can see that implementation in use?<issue_comment>username_1: Short version ------------- > > Is "The easiest strategy is to write an upgrade script." is truly the easiest/recommended approach for static .NET languages like C#? > > > No. You could do that, but that's not the strength of NoSQL. Using C# does not change that. > > are there existing tools for code-first schema migration in NoSql Databases for those languages? > > > Not that I'm aware of. > > or is the NoSql ecosystem not to that point of maturity? > > > It's schemaless. I don't think that's the goal or measurement of maturity. Warnings -------- First off, I'm rather skeptical that just pushing an existing relational model to NoSql would in a general case solve more problems than it would create. SQL is for working with relations and on sets of data, noSQL is targeted for working with non-relational data: "islands" with few and/or soft relations. Both are good at what what they are targeting, but they are good at different things. **They are not interchangeable**. Not without serious effort in data redesign, team mindset and application logic change, possibly invalidating most previous technical design decision and having impact run up to architectural system properties and possibly up to user experience. Obviously, it may make sense in your case, but definitely **do the ROI math before committing**. Dealing with schema change -------------------------- Assuming you really have good reasons to switch, and schema change management is a key in that, I would suggest to **not fight the schemaless nature of NoSQL and embrace it instead**. Accept that your data will have different schemas. ### Don't do upgrade scripts .. unless you know your application data set will never-ever grow or change notably. [The other SO post you referenced](https://stackoverflow.com/a/3007620/331325) explains it really well. You just **can't rely on being able to do this in long term** and hence you need a plan B anyway. Might as well start with it and only use schema update scripts if it really is the simpler thing to do for that specific case. I would maybe add to the argumentation that a good NoSQL-optimized data model is usually optimized for single-item seeks and writes and mass-updates can be significantly heavier compared to SQL, i.e. to update a single field you may have to rewrite a larger portion of the document + maybe handle some denormalizations introduced to reduce the need of lookups in noSQL (and it may not even be transactional). So "large" in NoSql may happen to be significantly smaller and occur faster than you would expect, when measuring in upgrade down-time. ### Support multiple schemas concurrently Having different concurrently "active" schema versions is in practice expected since **there is no enforcement anyway** and that's the core feature you are buying into by switching to NoSQL in the first place. 
Ideally, in noSQL mindset, your logic should be able to work with any input data that meets the requirements a specific process has. It should depend on its required input not your storage model (which also makes universally sense for dependency management to reduce complexity). Maybe logic just depends on a few properties in a single type of document. It should not break if some other fields have changed or there is some extra data added as long as they are not relevant to given specific work to be done. Definitely it should not care if some other model type has had changes. This approach usually implies working on some soft value bags (JSON/dynamic/dictionary/etc). Even if the storage model is schema-less, then each business logic process has expectations about input model (schema subset) and it should validate it can work with what it's given. Persisted schema version number along model also helps in trickier cases. As a C# guy, I personally avoid working with dynamic models directly and prefer creating a **strongly typed objects to wrap each dynamic storage type**. To avoid having to manage N concurrent schema version models (with minimal differences) and constantly upgrade logic layer to support new schema versions, I would **implement it as a superset of all currently supported schema versions** for given entity and implement any interfaces you need. Of course you could add N more abstraction layers ;) Once some old schema versions have eventually phased out from data, you can simplify your model and get strongly typed support to reach all dependents. Also, it's important for **logic layer should have a fallback or reaction plan** should the input model NOT match the requirements for carrying out the intended logic. It's up to app when and where you can auto-upgrade, accept a discard, partial reset or have to direct to some trickier repair queue (up to manual fix if no automatics can cut it) or have to just outright reject the request due to incompatibility. Yes, there's the problem of querying across sets of models with different versions, so you should always consider those cases as well. You may have to adjust querying logic to query different versions separately and merge results (or accept partial results if acceptable). There definitely are tradeoffs to consider, sure. ### So, migrations? A downside (if you consider migrations tool set availability) is that you don't have one true schema to auto generate the model or it's changes as the **C# model IS the source-of-truth schema** you're currently supporting. Actually, quite similar to code-first mindset, but without migrations. You could implement an incoming model pipe which auto-upgrades the models as they are read and hence reduce the number schema versions you need to support upstream. I would say this is as close to migrations as you get. I don't know any tools to do this for you automatically and I'm not sure I would want it to. There are trade-offs to consider, for example some clients consuming the data may get upgraded with different time-line etc. Upgrade to latest may not always be what you want. Conclusion ---------- NoSQL is by definition not SQL. Both are cool, but expecting equivalency or interchangeability is bound for trouble. You still have to consider and manage schema in NoSQL, but if you want one true enforced & guaranteed schema, then consider SQL instead. 
Upvotes: 5 <issue_comment>username_2: While Imre's answer is really great and I agree with it in every detail, I would like to add more to it while trying not to duplicate information.

Short version
-------------

If you plan to migrate your existing C#/EF/SQL project to MongoDB, there is a high chance that you shouldn't. It has probably worked quite well for some time, the team knows it, probably hundreds or more bugs have already been fixed, and users are more or less happy with it. This is the real value that you already have. And I mean it. For reasons why you should not replace old code with new code, see here: <https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/>. Also, more important than the existence of tools for any technology is that it brings value and works as promised (tooling is secondary).

Disclaimers
-----------

1. I do not like the explanation from MongoDB you cited that claims that a statically typed language is an issue here. It is true, but only on a basic, superficial level. More on this later.
2. I do not agree that EF Code First Migration is very mature. Though it is really great for development and test environments, and much, much better than previous .NET database-first approaches, you still have to have your own careful approach for production deployments.
3. Investing in your own tooling should not be a blocker for you. In fact, if the engine you choose is really great, it is worthwhile to write some specific tooling around it. I believe that great teams rarely use tooling "off the shelf". They rather choose technologies wisely and then customize tools to their needs or build new tools around them (probably selling the tool a year or two later).

Where the front line lies
-------------------------

It is not between statically and dynamically typed languages. This difference is highly overrated. It is more about the problem at hand and the nature of your schema. Part of the schema is quite static and will play nicely in both the static and the dynamic "world", but the other part can naturally change with time, and that part fits dynamically typed languages better, though not in essence: you can easily write code in C# that holds a list of (key, value) pairs and thus have dynamism under control. What dynamically typed languages give you is the impression that you call properties directly, while in C# you access them by "key". While that is easier and prettier for the developer, it does not save you from the bigger problems like deploying schema changes, accessing different versions of schemas, etc. So the static/dynamic language question is not an issue here at all. It is rather about drawing a line between the data **that you want to control from your code** (that is involved in any logic) and the other part that you do not have to control strictly. The second part does not have to be explicitly and minutely expressed in the schema in your code (it can be a list or dictionary rather than named fields/properties, because maintaining such fields costs you but does not bring any value).

My Use Case
-----------

Once upon a time my team made a project that uses three different databases:

1. SQL for "usual" configuration and evidence stuff
2. Graph database to make it natural to build a wide network of arbitrarily connected objects
3. Document database tuned for searching (Elastic Search in fact) to make searching instant and really modern (like dealing with typos and the like)

Of course it is a challenge to deploy such a wide technology stack, but each part of it brings its best to the whole solution. The aim of the project is to search through a knowledge base of literally anything (projects, people, books, products, documents, simply anything). That's why SQL is here only to record a list of available "knowledge databases" and the users assigned to them. The schema here is obvious, stable and trivial. There is a low probability of changes in the future.

Next, the graph database allows us to literally "throw" anything into the database from different sources and connect things with each other. The idea, to put it simply, is to have objects accessible by ID. Next, Elastic Search is here to accumulate IDs and a selected subset of properties to make them searchable in an instant. Here the schema contains only an ID and a list of (key, value) pairs. As the final step, to put it simply, the solution calls Elastic Search, gets IDs and displays details (the schema is irrelevant as we treat it as a list of key/value pairs, so the GUI is prepared to build screens dynamically).

Though the way to the solution was really painful. We tested a few graph databases by running proofs of concept, only to find that most of them simply do not work for operations like updating data! (ugh!!!) Finally we found one good enough DB. On the other hand, finding and using Elastic Search was a great pleasure! Though it is great, you have to be aware that under the pressure of uploading massive data it can break, so **you have to adjust your tooling to adapt to it** (so no silver bullet here).

Going into a more widely used direction
---------------------------------------

Apart from my use case, which is kind of extreme, usually you have something "in-between". For example, a database for documents. It can have an almost static "header" of fields like ID, name, author, and so on, which your code can manage "traditionally", while all the other fields are managed in a way where they can exist or not and can have different contents or structure. "The header" is the part you decided to make relevant for, and controllable by, the project. The rest is accompanying rather than crucial (from the project logic point of view).

Different approaches
--------------------

I would rather recommend learning about the strengths of particular NoSQL database types, finding answers to why they were created and why they are popular and useful. Then answer in which ways they can bring benefits to your project. **BTW, it is interesting: why have you indicated MongoDB?**

The other way around would be to answer what your project's current greatest weaknesses or challenges are from a technological point of view, be it performance, cost of supporting changes, the need to scale significantly, or something else. Then try to answer whether some NoSQL DB would be great at resolving that issue.

Conclusion
----------

I'm sure you can find benefits of NoSQL databases for your project, either by replacing part of it or by bringing new value to users (searching, for example?). Either way, I would prefer a really good technology that brings what it promises rather than looking at whether it is fully supported by tools around it. And a proof of concept is a really good tool to check technologies in a scenario that is very simple but at the same time meaningful for you. But the approach should be not to play with technologies, but to aggressively and quickly prove or disprove their quality. There are so many promises and advertisements around that we should protect ourselves by focusing on the real things that work.

Upvotes: 3
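To make username_1's "superset model" suggestion concrete, here is one minimal C# sketch of the idea: a strongly typed wrapper over a soft value bag that tolerates two schema versions at once. All names are invented for illustration, and the value bag is a plain dictionary rather than any specific driver type:

```csharp
using System.Collections.Generic;

// Wraps a schemaless document; supports schema v1 (firstName/lastName)
// and schema v2 (a single fullName field) at the same time.
public class PersonDocument
{
    private readonly IDictionary<string, object> _bag;

    public PersonDocument(IDictionary<string, object> bag) => _bag = bag;

    // A missing field means the document predates versioning: treat it as v1.
    public int SchemaVersion =>
        _bag.TryGetValue("schemaVersion", out var v) ? (int)v : 1;

    public string FullName =>
        SchemaVersion >= 2
            ? (string)_bag["fullName"]
            : $"{_bag["firstName"]} {_bag["lastName"]}";

    // The logic layer should check this before acting on the document,
    // per the "fallback or reaction plan" point in Imre's answer.
    public bool IsSupported => SchemaVersion >= 1 && SchemaVersion <= 2;
}
```

Once all persisted documents have reached v2, the v1 branch (and the dictionary lookups behind it) can be deleted and the property collapsed to a plain typed field, which is the "phase out old versions, then simplify" step described above.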
2018/03/16
375
1,009
<issue_start>username_0: I have an array with values, `sourceArray`, and I have an array with values to replace, `toReplace`. I want to replace all values in `sourceArray` that are equal to values in the `toReplace` array. Is there some smart way to do it in Python?

E.g.

```
sourceArray = [0,1,2,3,4,5,5,6,7]
toReplace = [5,6]
```

After the replace I want to have

```
sourceArray = [0,1,2,3,4,0,0,0,7]
```

<issue_comment>username_1: List comprehension with a conditional expression:

```
[0 if i in toReplace else i for i in sourceArray]
```

If the `toReplace` list is large, it's preferable to make it a `set` to get O(1) lookup.

**Example:**

```
In [21]: sourceArray = [0,1,2,3,4,5,5,6,7]
    ...: toReplace = [5,6]
    ...:

In [22]: [0 if i in toReplace else i for i in sourceArray]
Out[22]: [0, 1, 2, 3, 4, 0, 0, 0, 7]
```

Upvotes: 4 [selected_answer]<issue_comment>username_2: You can use list comprehensions:

```
new_list = [x if toReplace.count(x)==0 else 0 for x in sourceArray]
```

Upvotes: 1
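Spelling out the `set` suggestion from the accepted answer, plus an in-place variant for when `sourceArray` must keep its identity (plain Python, nothing assumed beyond the question's data):

```python
sourceArray = [0, 1, 2, 3, 4, 5, 5, 6, 7]
toReplace = {5, 6}  # set: membership test is O(1) instead of O(len(toReplace))

# New list:
result = [0 if x in toReplace else x for x in sourceArray]

# Or in place, mutating the original list object:
for i, x in enumerate(sourceArray):
    if x in toReplace:
        sourceArray[i] = 0

print(result)       # [0, 1, 2, 3, 4, 0, 0, 0, 7]
print(sourceArray)  # [0, 1, 2, 3, 4, 0, 0, 0, 7]
```

Note that username_2's `toReplace.count(x)` version rescans the whole replacement list for every element, so it is O(n*m); the set-based form is the one that scales.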
2018/03/16
1,988
5,678
<issue_start>username_0: Let's say we have this datetime:

```
var d = new Date("Sat Jul 21 2018 14:00:00 GMT+0200");
```

Exporting it as a string (`console.log(d)`) gives inconsistent results among browsers:

* `Sat Jul 21 2018 14:00:00 GMT+0200 (Paris, Madrid (heure d’été))` with Chrome
* `Sat Jul 21 14:00:00 UTC+0200 2018` with Internet Explorer, etc.

so we can't send a datetime to a server in an **inconsistent format**.

The natural idea then would be to ask for an ISO 8601 datetime, and use `d.toISOString();` but it gives the UTC datetime: `2018-07-21T12:00:00.000Z` whereas I would like the local-timezone time instead:

```
2018-07-21T14:00:00+0200
or
2018-07-21T14:00:00
```

**How to get this (without relying on a third-party dependency like momentjs)?**

I tried this, which *seems to work*, but **isn't there a more natural way to do it?**

```js
var pad = function(i) { return (i < 10) ? '0' + i : i; };
var d = new Date("Sat Jul 21 2018 14:00:00 GMT+0200");
Y = d.getFullYear();
m = d.getMonth() + 1;
D = d.getDate();
H = d.getHours();
M = d.getMinutes();
S = d.getSeconds();
s = Y + '-' + pad(m) + '-' + pad(D) + 'T' + pad(H) + ':' + pad(M) + ':' + pad(S);
console.log(s);
```

<issue_comment>username_1: There is limited built-in support for formatting date strings with timezones in ECMA-262; there are either the implementation-dependent *toString* and *toLocaleString* methods, or *toISOString*, which is always UTC. It would be good if *toISOString* allowed a parameter to specify UTC or local offset (where the default is UTC).

Writing your own function to generate an ISO 8601 compliant timestamp with local offset isn't difficult:

```js
function toISOLocal(d) {
  var z = n => ('0' + n).slice(-2);
  var zz = n => ('00' + n).slice(-3);
  var off = d.getTimezoneOffset();
  var sign = off > 0 ? '-' : '+';
  off = Math.abs(off);

  return d.getFullYear() + '-'
    + z(d.getMonth()+1) + '-'
    + z(d.getDate()) + 'T'
    + z(d.getHours()) + ':'
    + z(d.getMinutes()) + ':'
    + z(d.getSeconds()) + '.'
    + zz(d.getMilliseconds())
    + sign + z(off/60|0) + ':' + z(off%60);
}

console.log(toISOLocal(new Date()));
```

Upvotes: 6 [selected_answer]<issue_comment>username_2: My version:

```js
// https://stackoverflow.com/questions/10830357/javascript-toisostring-ignores-timezone-offset/37661393#37661393
// https://stackoverflow.com/questions/49330139/date-toisostring-but-local-time-instead-of-utc/49332027#49332027
function toISOLocal(d) {
  const z = n => ('0' + n).slice(-2);
  let off = d.getTimezoneOffset();
  const sign = off < 0 ? '+' : '-';
  off = Math.abs(off);
  return new Date(d.getTime() - (d.getTimezoneOffset() * 60000)).toISOString().slice(0, -1)
    + sign + z(off / 60 | 0) + ':' + z(off % 60);
}

console.log(toISOLocal(new Date()));
```

Upvotes: 1 <issue_comment>username_3: I have found a solution which has worked for me. See this post: [Modifying an ISO Date in Javascript](https://stackoverflow.com/questions/23684603/)

For myself, I tested this with a slight modification to remove the "T", and it is working. Here is the code I am using:

```
// Create date at UMT-0
var date = new Date();
// Modify the UMT + 2 hours
date.setHours(date.getHours() + 2);
// Reformat the timestamp without the "T", as YYYY-MM-DD hh:mm:ss
var timestamp = date.toISOString().replace("T", " ").split(".")[0];
```

An alternative method is to stipulate the format you need, like this:

```
// Create the timestamp format
var timeStamp = Utilities.formatDate(new Date(), "GMT+2", "yyyy-MM-dd' 'HH:mm:ss");
```

Note: these are suitable in locations that do not have daylight saving changes to the time during the year.

It has been pointed out that the above formulas are for a specific timezone. In order to have the local time in ISO format, first specify a suitable locale ("sv-SE" is the closest and easiest to modify), then make a modification (change the space to a "T") to match the ISO format, like this:

```
var date = new Date(); // Create date
var timestamp = date.toLocaleString("sv-SE").replace(" ", "T").split(".")[0]; // Reformat the Locale timestamp ISO YYYY-MM-DDThh:mm:ss
```

References:
<https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/toLocaleString>)
<https://www.w3schools.com/Jsref/jsref_tolocalestring.asp>
<https://www.w3schools.com/Jsref/tryit.asp?filename=tryjsref_tolocalestring_date_all>

Upvotes: 0 <issue_comment>username_4: The trick is to adjust the time by the timezone, and *then* use `toISOString()`. You can do this by creating a new date with the original time and subtracting the timezone offset from the original time:

```js
var d = new Date("Sat Jul 21 2018 14:00:00 GMT+0200");
var newd = new Date(d.getTime() - d.getTimezoneOffset()*60000);
console.log(newd.toISOString()); // 2018-07-21T22:00:00.000Z
```

Alternatively, you can simply adjust the original date variable:

```js
var d = new Date("Sat Jul 21 2018 14:00:00 GMT+0200");
d.setTime(d.getTime() - d.getTimezoneOffset()*60000);
console.log(d.toISOString()); // 2018-07-21T22:00:00.000Z
```

Note, however, that adjusting the original date this way will affect all of the date methods. For your convenience, the result from `.getTime()` is the number of *milliseconds* since 1 January 1970. However, `getTimezoneOffset()` gives a time zone difference from UTC in minutes; that's why you need to multiply by `60000` to get this in milliseconds. Of course, the new time is still relative to UTC, so you'll have to ignore the `Z` at the end:

```js
d = d.slice(0,-1); // 2018-07-21T22:00:00.000
```

Upvotes: 4
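One more detail worth showing: a local-time ISO string that keeps its offset (as produced by the accepted `toISOLocal`) parses back to the exact same instant in any engine, which is the practical reason to prefer it over the offset-less form when sending it to a server:

```js
// Same instant, expressed two ways:
const utc   = new Date("2018-07-21T12:00:00.000Z");
const local = new Date("2018-07-21T14:00:00.000+02:00");
console.log(utc.getTime() === local.getTime()); // true
```

The offset-stripping approaches above, by contrast, produce a string that the server can only interpret correctly if it already knows which timezone the client was in.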
2018/03/16
947
3,927
<issue_start>username_0: It seems like you should be able to do this, because building a form dynamically based off of a class definition (Angular) would work so much better if the logic could be written agnostically of the class. This would be scalable, so an addition of a field to the class would not also require an update to the logic producing the form and template. Is there any way to do this, or even an NPM module that will do this?

I found that I can do `ClassName.toString()` but it would be a pain to parse that. And I just might write a module to do it if nothing else. I just feel like instantiating a dummy instance of the class for the purpose of enumerating over its properties is a poor strategy.<issue_comment>username_1: **Any way?** Declare your class as a function, and put the properties on the prototype:

```
var Presidents = function() { };
Presidents.prototype = {
  "washington" : "george",
  "adams" : "john"
}

console.log(Object.keys(Presidents.prototype))
// Output is
// [ 'washington', 'adams' ]
```

Upvotes: 0 <issue_comment>username_2: You could use `Object.getOwnPropertyNames()`. Example class:

```
class Foo {
  setBar() {
    throw Error('not implemented');
    return false;
  }

  getBar() {
    throw Error('not implemented');
    return false;
  }
}
```

And then `Object.getOwnPropertyNames(Foo.prototype)` results in `["constructor", "setBar", "getBar"]`

While I was researching this I looked into `Object.keys` first, and although it didn't work, you may wish to reference [the documentation for `Object.keys`'s polyfill](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/keys). It has code for stripping out `constructor`, `toString`, and the like, as well as properly implementing `hasOwnProperty`. Also see [Bergi's answer here](https://stackoverflow.com/questions/30158515/list-down-all-prototype-properties-of-an-javascript-object).

Upvotes: 2 [selected_answer]<issue_comment>username_3: `getOwnPropertyDescriptors` of a class prototype will only expose methods and accessor descriptors; data properties cannot be determined without instantiation (also because constructor arguments can influence the amount, types and values of props). There can be several reasons to not want to instantiate (e.g. because some static counter tracks instances), so a workaround could be to dynamically create a copy of the class and instantiate that "shadow" along with sample constructor arguments.

```js
/**
 * get properties from class definition without instantiating it
 *
 * @param cls: class
 * @param args: sample arguments to pass to shadow constructor
 * @usage `const props = getInstanceProperties(MyClass);`
 * @notice this will regex replace the classname (can be an issue with strings containing that substring)
 */
const getInstanceProperties = (cls, args = [], ignore = ['constructor', 'toString']) => {
  const className = cls.prototype.constructor.name;
  const shadowCode = cls.toString().replace(new RegExp(`${className}`, 'g'), `_${className}_`);
  const shadowClass = eval(`(${shadowCode})`);
  const o = new shadowClass(...args);
  const methodsAndAccessors = Object.getOwnPropertyDescriptors(cls.prototype);
  const dataDescriptors = Object.getOwnPropertyDescriptors(o);
  const descriptors = Object.assign({}, methodsAndAccessors, dataDescriptors);
  ignore.forEach(name => delete descriptors[name]);
  return descriptors;
};

class Foo extends Object {
  static instances = 0;
  #myPrivateVar = 123;
  myValue = 123;
  constructor() {
    super();
    this.myConstructorVar = ++Foo.instances;
  }
  myMethod() {}
  set myAccessor(x) {}
}

console.log(Object.keys(getInstanceProperties(Foo)));
```

will return: `[ 'myMethod', 'myAccessor', 'myValue', 'myConstructorVar' ]`

Upvotes: 0
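For simple cases that don't need username_3's eval trick, the prototype/instance split can be shown in a few lines (standard ECMAScript only; `Foo` as defined above; note this *does* instantiate, which the asker wanted to avoid, and is only here to make the split visible):

```js
// Methods and accessors live on the prototype...
const methods = Object.getOwnPropertyNames(Foo.prototype)
  .filter(name => name !== 'constructor');

// ...while data properties only exist on an instance.
const dataProps = Object.getOwnPropertyNames(new Foo());

console.log(methods);   // [ 'myMethod', 'myAccessor' ]
console.log(dataProps); // [ 'myValue', 'myConstructorVar' ]
```

This is why no approach can enumerate data properties from the bare class: they simply don't exist until a constructor has run, which is the problem the "shadow class" workaround sidesteps.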
2018/03/16
430
1,902
<issue_start>username_0: I've been trying to learn about WCF services and hosts. I made a simple host program to host my simple service. It works fine, but I don't understand how the host program can continue completing unrelated tasks after opening the service. Does the service run on a separate thread that opens behind the scenes? Or when my client calls the service, does that pause the host program? I don't see that documented anywhere.

```
namespace MyHostProgram
{
    class Program
    {
        static void Main(string[] args)
        {
            var host = new ServiceHost(typeof(MyServices.Service1));
            host.Open();

            while (true)
            {
                Console.WriteLine("Doing other tasks in host program");
            }

            host.Close();
        }
    }
}
```

Note that I am not asking if adding another thread will speed things up like [WCF Service and Threading](https://stackoverflow.com/questions/3969324/wcf-service-and-threading), I'm asking what the default behavior is.<issue_comment>username_1: All that means is Open() spawns up a new thread that contains the transport receive-loop, or registers for asynchronous callbacks (depending on the binding). You may consider reading up on multi-threading and asynchronous programming to better grasp this. Hope this helps!

Upvotes: -1 <issue_comment>username_2: When you call the Open function of the ServiceHost class, it creates and opens the listener for the service on the configured endpoints. It does this asynchronously, and control is given back to the calling thread.

So the answers to your questions are:

> Does the service run on a separate thread that opens behind the scenes?

Yes

> when my client calls the service, does that pause the host program?

No

Upvotes: 1 [selected_answer]
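A side note on the sample: because `while (true)` never exits, the `host.Close()` line is unreachable as written. The canonical self-hosting pattern blocks on console input instead, and a `using` block guarantees the host is closed even on exceptions. A minimal sketch (same hypothetical `MyServices.Service1` type as in the question):

```csharp
using System;
using System.ServiceModel;

namespace MyHostProgram
{
    class Program
    {
        static void Main(string[] args)
        {
            // Dispose() on ServiceHost calls Close() for us.
            using (var host = new ServiceHost(typeof(MyServices.Service1)))
            {
                host.Open(); // listeners now run on background threads

                Console.WriteLine("Host is running. Press Enter to stop.");
                Console.ReadLine(); // main thread just waits; requests are still served
            }
        }
    }
}
```

While `ReadLine()` blocks the main thread, incoming client calls are dispatched on thread-pool threads, which is exactly the behavior the accepted answer describes.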
2018/03/16
1,081
3,817
<issue_start>username_0: I'm using serverless.yml to create a couple of services in AWS CloudFormation, specifically: cognitoUserPool and UserPoolClient. Both of these creations will return IDs that I will use in my flat HTML files with the Cognito library to connect to Amazon Cognito, so, since I am serving flat files from S3, I need these values to be coded inside the files.

Now I'm looking for a way of automating this, perhaps leaving a placeholder in the files and then running them through a preprocessor that replaces the placeholders with the output values before uploading them to S3.

Any ideas how this can be achieved? My first guess would be to export the output variables from `serverless deploy` and then use these values in a task runner.<issue_comment>username_1: To get outputs from `serverless` you can use the [serverless-stack-output](https://www.npmjs.com/package/serverless-stack-output) plugin, or you can deduce the stack name and use the `aws` command.

```
aws cloudformation describe-stacks --stack-name SERVICE-STAGE --query Stacks[0].Outputs
```

Replace `SERVICE` with your service name and `STAGE` with your stage. You should get a JSON object with the outputs from this command.

If you want to get just specific outputs, try:

```
aws cloudformation describe-stacks --stack-name SERVICE-STAGE --query 'Stacks[0].Outputs[?OutputKey==`OUTPUT_KEY`].OutputValue' --output text
```

Replace `SERVICE`, `STAGE` and `OUTPUT_KEY` with the values you want.

On Windows use (the quotes work differently):

```
aws cloudformation describe-stacks --stack-name SERVICE-STAGE --query Stacks[0].Outputs[?OutputKey==`OUTPUT_KEY`].OutputValue --output text
```

For more details on `--query` see <https://docs.aws.amazon.com/cli/latest/userguide/controlling-output.html>

Upvotes: 2 <issue_comment>username_2: To achieve this without using a Serverless plugin, add the following to your package.json file:

```
"scripts": {
  "sls:info": "sls info --verbose | tee ./.slsinfo",
}
```

This will create the file `.slsinfo` containing your serverless outputs (amongst other things). Run it by calling `npm run sls:info`.

You can then update package.json:

```
"scripts": {
  "sls:deploy": "sls deploy && npm run sls:info",
  "sls:info": "sls info --verbose | tee .slsinfo",
}
```

Now you can call `npm run sls:deploy` and it will deploy your service and add your outputs to the .slsinfo file.

To use the info in `.slsinfo`, the easiest way I have found is to use regex. Example below:

```
const slsinfo = require('fs').readFileSync('./.slsinfo', 'utf8');

function getOutput(output) {
  return slsinfo.match(new RegExp('('+output+': )((.?)+)(\\n)'))[2];
}
```

Using the above method you can get your output as follows:

```
const value = getOutput('MyOutputName')
```

Upvotes: 3 <issue_comment>username_3: Follow-up on [@username_1](https://stackoverflow.com/users/492773/username_1)'s answer. The `query` option can be extended to output TOML-like key-value pairs in the format `OutputKey=OutputValue`:

```bash
aws cloudformation describe-stacks --stack-name nebula-api-dev --query 'join(`\n`, Stacks[0].Outputs[*].join(`=`, [OutputKey, OutputValue]))' --output text
```

Output:

```
LambdaFunctionQualifiedArn=arn:aws:lambda:...
ServerlessDeploymentBucketName=stack...
IamRoleLambdaExecution=arn:aws:iam::...
```

This output can directly be piped to an `.env` file:

```bash
aws cloudformation describe-stacks --stack-name nebula-api-dev --query 'join(`\n`, Stacks[0].Outputs[*].join(`=`, [OutputKey, OutputValue]))' --output text > .env
```

And this file can automatically be loaded by a package like [dotenv](https://www.npmjs.com/package/dotenv) or by the [Serverless Framework](https://www.serverless.com/framework/docs/environment-variables#support-for-env-files).

Upvotes: 0
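If `jq` is available, the same key=value extraction can be done without the JMESPath `join` gymnastics; a small sketch (the `SERVICE-STAGE` stack name is a placeholder, as in the answers above):

```bash
aws cloudformation describe-stacks --stack-name SERVICE-STAGE --output json \
  | jq -r '.Stacks[0].Outputs[] | "\(.OutputKey)=\(.OutputValue)"' > .env
```

The `-r` flag emits raw strings instead of JSON-quoted ones, so the resulting file is directly usable as a dotenv file.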
2018/03/16
1,427
5,038
<issue_start>username_0: I am struggling to implement unit testing for action methods that incorporate `User.Identity.Name` functionality. The methods that I've come across fail because the properties that they suggest writing to throw 'read-only' errors (e.g. writing to `HttpContext` or the controller `User`) I have an action method: ``` [Authorize] public async Task EditProject(int projectId) { Project project = repository.Projects.FirstOrDefault(p => p.ProjectID == projectId); if (project != null) { //HOW DO I MOCK USER.IDENTITY.NAME FOR THIS PORTION? var user = await userManager.FindByNameAsync(User.Identity.Name); bool owned = await checkIfUserOwnsItem(project.UserID, user); if (owned) { return View(project); } else { TempData["message"] = $"User is not authorized to view this item"; } } return View("Index"); } ``` If I want to unit test this action method, how can I mock up the `User.Identity` object? ``` [Fact] public async Task Can_Edit_Project() { //Arrange var user = new AppUser() { UserName = "JohnDoe", Id = "1" }; Mock mockRepo = new Mock(); mockRepo.Setup(m => m.Projects).Returns(new Project[] { new Project {ProjectID = 1, Name = "P1", UserID = "1"}, new Project {ProjectID = 2, Name = "P2", UserID = "1"}, new Project {ProjectID = 3, Name = "P3", UserID = "1"}, }); Mock tempData = new Mock(); Mock> userMgr = GetMockUserManager(); //Arrange ProjectController controller = new ProjectController(mockRepo.Object, userMgr.Object) { TempData = tempData.Object, }; //HOW WOULD I MOCK THE USER.IDENTITY.NAME HERE? //The example below causes two errors: // 1) 'Invalid setup on a non-virtual (overridable in VB) member //mock => mock.HttpContext //and 2) HttpContext does not contain a definition for IsAuthenticated var mock = new Mock(); mock.SetupGet(p => p.HttpContext.User.Identity.Name).Returns(user.UserName); mock.SetupGet(p => p.HttpContext.Request.IsAuthenticated).Returns(true); //Act var viewResult = await controller.EditProject(2); Project result = viewResult.ViewData.Model as Project; //Assert Assert.Equal(2, result.ProjectID); } ``` EDIT: Making some progress by adding the code below. ``` var claims = new List() { new Claim(ClaimTypes.Name, "<NAME>"), new Claim(ClaimTypes.NameIdentifier, "1"), new Claim("name", "<NAME>"), }; var identity = new ClaimsIdentity(claims, "TestAuthType"); var claimsPrincipal = new ClaimsPrincipal(identity); var mockPrincipal = new Mock(); mockPrincipal.Setup(x => x.Identity).Returns(identity); mockPrincipal.Setup(x => x.IsInRole(It.IsAny())).Returns(true); var mockHttpContext = new Mock(); mockHttpContext.Setup(m => m.User).Returns(claimsPrincipal); ``` The `User.Identity.Name` is set properly now, but the line below still returns a `user = null` `var user = await userManager.FindByNameAsync(User.Identity.Name);` How can I make sure my mocked `UserManager` can return a mocked logged in user?<issue_comment>username_1: You can create fakeContext and use that. 
See below: ``` var fakeContext = new Mock(); var fakeIdentity = new GenericIdentity("User"); var principal = new GenericPrincipal(fakeIdentity, null); fakeContext.Setup(x => x.User).Returns(principal); var projectControllerContext = new Mock(); projectControllerContext.Setup(x => x.HttpContext).Returns(fakeContext.Object); ``` Upvotes: 2 <issue_comment>username_2: Set your fake `User` through `ControllerContext` ``` var context = new ControllerContext { HttpContext = new DefaultHttpContext { User = fakeUser } }; // Then set it to controller before executing test controller.ControllerContext = context; ``` Upvotes: 4 <issue_comment>username_3: To unit test my action method that uses ``` var user = await userManager.FindByNameAsync(User.Identity.Name); ``` I needed to: 1. set up my user `var user = new AppUser() { UserName = "JohnDoe", Id = "1" };` 2. Set up my HttpContext to give data to return `user.UserName` in the `User.Identity.Name` object in the controller ``` var claims = new List() { new Claim(ClaimTypes.Name, user.UserName), new Claim(ClaimTypes.NameIdentifier, user.Id), new Claim("name", user.UserName), }; var identity = new ClaimsIdentity(claims, "Test"); var claimsPrincipal = new ClaimsPrincipal(identity); var mockPrincipal = new Mock(); mockPrincipal.Setup(x => x.Identity).Returns(identity); mockPrincipal.Setup(x => x.IsInRole(It.IsAny())).Returns(true); var mockHttpContext = new Mock(); mockHttpContext.Setup(m => m.User).Returns(claimsPrincipal); ``` 3. Setup the mock `UserManager` to return the `user` object on the `FindByNameAsync` method ``` Mock> userMgr = GetMockUserManager(); userMgr.Setup(x => x.FindByNameAsync(It.IsAny())).ReturnsAsync(user); ``` Edit: ``` public Mock> GetMockUserManager() { var userStoreMock = new Mock>(); return new Mock>( userStoreMock.Object, null, null, null, null, null, null, null, null); } ``` Upvotes: 4 [selected_answer]
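Putting the selected answer's three steps together, a condensed Arrange block might look like the sketch below. `AppUser`, `mockRepo`, `GetMockUserManager` and `ProjectController` come from the question; the rest assumes Moq plus the usual `System.Security.Claims` and ASP.NET Core namespaces.
```csharp
// Arrange (sketch): fake user, claims principal, and a UserManager that returns it
var user = new AppUser { UserName = "JohnDoe", Id = "1" };
var identity = new ClaimsIdentity(new[]
{
    new Claim(ClaimTypes.Name, user.UserName),
    new Claim(ClaimTypes.NameIdentifier, user.Id)
}, "TestAuth");

var userMgr = GetMockUserManager();
userMgr.Setup(m => m.FindByNameAsync(It.IsAny<string>())).ReturnsAsync(user);

var controller = new ProjectController(mockRepo.Object, userMgr.Object)
{
    ControllerContext = new ControllerContext
    {
        HttpContext = new DefaultHttpContext { User = new ClaimsPrincipal(identity) }
    }
};
```
With this in place, `User.Identity.Name` resolves to `"JohnDoe"` inside the action and the mocked `FindByNameAsync` returns the matching user.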
2018/03/16
5,913
13,651
<issue_start>username_0: have the below RSA Private key with me and I have to generate a JWT token using **RS256 algorithm**. Here is my sample **Private Key**: ``` -----<KEY> ``` **Header** ``` { "alg": "RS256", "typ": "JWT" } ``` **Body**: ``` { "iss": "14a2fecb-ddd7-4823-46e2-67515bc01734", "sub": "13f7982d-1f78-46e2-4823-3273568fce89", "iat": 1521132568, "exp": 1522136156, "aud": "account-d.example.com", "scope": "signature" } ``` Below is my sample Java code: ``` package com.knyc.demo; import java.security.spec.PKCS8EncodedKeySpec; import org.bouncycastle.util.encoders.Base64; import org.mule.api.MuleEventContext; import org.mule.api.lifecycle.Callable; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import io.jsonwebtoken.*; import java.util.Date; import javax.crypto.spec.SecretKeySpec; import javax.xml.bind.DatatypeConverter; import java.security.Key; public class GenerateJwtToken implements Callable{ protected final Logger logger = LoggerFactory.getLogger(getClass()); @Override public String onCall(MuleEventContext eventContext) throws Exception { String issuer = "14a2fecb-ddd7-4823-a9cc-67515bc01734"; String scope = "signature"; String subject = "13f7982d-1f78-46e2-a843-3273568fce89"; String audience = "account-d.docusign.com"; String privateKeyPEM ="-----BEGIN RSA PRIVATE KEY-----\n" + "<KEY>" + "<KEY>" + "<KEY>" + "<KEY>" + "<KEY>" + "<KEY>" + "<KEY>" + "<KEY>z<KEY>" + "<KEY>" + "<KEY>" + "<KEY>" + "<KEY>" + "<KEY>" + "<KEY>" + "<KEY>" + "<KEY>" + "<KEY>" + "<KEY>U+<KEY>" + "<KEY>" + "<KEY>" + "<KEY>" + "<KEY>" + "<KEY>" + "<KEY>" + "<KEY>" + "-----END RSA PRIVATE KEY-----"; String privKeyPEM = privateKeyPEM.replace("-----BEGIN RSA PRIVATE KEY-----\n", ""); privKeyPEM = privKeyPEM.replace("-----END RSA PRIVATE KEY-----", ""); /* byte [] encoded = Base64.decode(privKeyPEM); PKCS8EncodedKeySpec keySpec = new PKCS8EncodedKeySpec(encoded); */ try { SignatureAlgorithm signatureAlgorithm = SignatureAlgorithm.RS256; long nowMs = System.currentTimeMillis()/1000; long expMs = nowMs + 3600; Date now = new Date(nowMs); Date exp = new Date(expMs); byte[] apiKeySecretBytes = DatatypeConverter.parseBase64Binary(privKeyPEM); Key signingKey = new SecretKeySpec(apiKeySecretBytes, signatureAlgorithm.getJcaName()); System.out.println(signingKey); JwtBuilder builder = Jwts.builder() .setIssuedAt(now) .setSubject(subject) .setIssuer(issuer) .setAudience(audience) .claim("scope",scope) .signWith(signatureAlgorithm, signingKey) .setExpiration(exp); return builder.compact(); } catch (Exception e) { e.printStackTrace(); return ""; } } } ``` It is throwing exception saying: "RSA signatures must be computed using an RSA PrivateKey. The specified key of type javax.crypto.spec.SecretKeySpec is not an RSA PrivateKey." 
Thanks in Advance...<issue_comment>username_1: You could use this library <https://github.com/jwtk/jjwt> ``` package ; import java.security.KeyFactory; import java.security.NoSuchAlgorithmException; import java.security.PrivateKey; import java.security.spec.InvalidKeySpecException; import java.security.spec.PKCS8EncodedKeySpec; import io.jsonwebtoken.Claims; import io.jsonwebtoken.Jwts; public class JWT { private String privateKey; public JWT(String privateKey) { this.privateKey = privateKey; } public String encode() { String retStr = null; Claims claims = Jwts.claims(); claims.put("issuer", "14a2fecb-ddd7-4823-a9cc-67515bc01734"); claims.put("scope", "signature"); claims.put("subject", "13f7982d-1f78-46e2-a843-3273568fce89"); claims.put("audience", "account-d.docusign.com"); // strip the headers privateKey = privateKey.replace("-----BEGIN RSA PRIVATE KEY-----", ""); privateKey = privateKey.replace("-----END RSA PRIVATE KEY-----", ""); privateKey = privateKey.replaceAll("\\s+",""); byte[] encodedKey = android.util.Base64.decode(this.privateKey, android.util.Base64.DEFAULT); PKCS8EncodedKeySpec keySpec = new PKCS8EncodedKeySpec(encodedKey); try { KeyFactory kf = KeyFactory.getInstance("RSA"); PrivateKey privKey = kf.generatePrivate(keySpec); retStr = Jwts.builder().setClaims(claims).signWith(privKey).compact(); } catch (NoSuchAlgorithmException e) { e.printStackTrace(); } catch (InvalidKeySpecException e) { e.printStackTrace(); } return retStr; } } ``` Upvotes: 3 <issue_comment>username_2: If some of you is still struggling in generating a jwt Token especially for Docusign Auth services maybe this example can work also for you : Before you start , use this command on linux box in order to convert your RSA private key in the correct format : copy and paste your key in a file and launch : ``` openssl pkcs8 -topk8 -nocrypt -in privatekeyOLD -out privatekeyNEW ``` After copy again the new key generated ,and paste inside your code inside the variable **String rsaPrivateKey** (maybe yuo'll remove some extra chars generating during the copy and paste process) As a reference I've used this <https://www.viralpatel.net/java-create-validate-jwt-token/> Required JDK 11 ``` import io.jsonwebtoken.Jwts; import java.security.KeyFactory; import java.security.NoSuchAlgorithmException; import java.security.PrivateKey; import java.security.spec.InvalidKeySpecException; import java.security.spec.PKCS8EncodedKeySpec; import java.time.Instant; import java.time.temporal.ChronoUnit; import java.util.Base64; import java.util.Date; import java.util.UUID; public class JWTIO{ // reference https://www.viralpatel.net/java-create-validate-jwt-token/ public static String createJwtSignedHMAC() throws InvalidKeySpecException, NoSuchAlgorithmException { PrivateKey privateKey = getPrivateKey(); Instant now = Instant.now(); String jwtToken = Jwts.builder() .setIssuer("2a03dbc6-XXXX-XXXXX-XXXX-7e9ac9df613f") .setSubject("2a285ff3-XXXX-XXXX-XXXXX-e433497afc23") .setAudience("account-d.docusign.com") .claim("scope","signature impersonation") .setIssuedAt(Date.from(now)) .setExpiration(Date.from(now.plus(5l, ChronoUnit.MINUTES))) .signWith(privateKey) .compact(); return jwtToken; } private static PrivateKey getPrivateKey() throws NoSuchAlgorithmException, InvalidKeySpecException { /* before to put the rsa private key in your String you must convert in PKCSS#8 * THE CERTIFICATE PROVIDED BY DOCUSIGN IS PKCS#1 and does not works * copy the rsa provate key ina file and use this command in linux ubuntu for * example * openssl pkcs8 
-topk8 -nocrypt -in privatekeyOLD -out privatekeyNEW * */ String rsaPrivateKey = "-----BEGIN PRIVATE KEY-----" + "<KEY>" + "<KEY>/sLv/<KEY>" + "KdpOMIoPDGCF8KCXA4F9hM/WXJstprEeqPi7a1FnXzi3uwaf+jy2zUviDt09jm+g" + "uu5TaZzTuzyoClEgfwIu2LwuHaJKtjceRKHWKvUqqxBQMXlq/s5lNXWajWSpLffP" + "/5L0YfCK9uW1FJJXT4cP9fiNbcd6GiHGwVk/eJy4QYaR9lWNvnKKu1H52xgWkLC+" + "TwfOtcqtjF9sWE5XIAjpdFVh5u64g/uOafKzEF/yVOTzPvYUWI3PhKjqSS3V3yhk" + "TqoFG53hAgMBAAECggEALo0NEgdkCRsK0XjUsurQb/vvx1nXSglQ+HLNwFCC0Yqq" + "HPpaVccu4ILejoJyl7zwWIBmLX+uhxXZrgT4MeXnvDnFmYjY8vfox0l0vm+QnO6c" + "0qXW+Ymy9PbG8BszmeVUc6l+zmuLL8eLWiGUYSjAESAYzupkAV02hEzx9XBjnWWl" + "ifoWOXvO22ADtO8jRk1ODbOrqyt1Hz7UDLtQI6Vdw3QovadW/3hKCx+0a/WxgDhR" + "VosPudbzIGYBzdnbOyT+ToVIyMBTJU/8muZbWsaaOXHhel9lD/CUjvCcivL5tcSU" + "0KvEiHVCXWfojbuy/RksgSTvl/aFEOrmqRjyu2JEbQKBgQDt+B4r6NVuqOPus+Xb" + "RHF8QpvtwzIWSxxAwbtAWxWDJSrMMlhAx1yZ4kefSxbxAkdNkv9vQoVabVAEiWdJ" + "VzXB8W9tvcD57zrbiraHQI5hCn+t5GIsSTnbhg6CG2dxv58uWviDneeNEdDeki+b" + "vwTTXuHIeyCnPdLI/vaMzO0clwKBgQCUdVm5IYxRd93ohYH14zIfty+7J301iB3t" + "t/fccrg1qjx4WniLbhLweVYdL44XpzMZUzdAYFhyHUIyAWIR4yHC3b0+u34oshjv" + "MD/gjWPiNTBBthYTy559todm3jyj+g9Z+gLLu2G93+wGl3u15igiTy0oMmh4bS3s" + "xtFk1pLQRwKBgF8L+wEOvjC0xFVTBTvO2oUHFcChdh/xYBd9SY0q1CzNa4qjkRxO" + "hG3yMykslL0ua8xQKjYGG71Ca/Nj7h0c+Bu+kwMCB1HMe3W0sbLT1gpsZxLNZWjK" + "1pEXujO9PlPwdWPOcfQf3Zw6wXIkcV+DrCnAe+3XP/OMfeRJ8a/LKemBAoGAf46M" + "9wqiO+WYH4+G6LS7fpCxTEdTx8kangQxzZIsQL/ykR568KI1V7WJji4sEpqwxxO/" + "J2sg03vcQob5spDLk1lenyYN8f2Eew+j8tbJebVlrzA6q+uKVE2e7X4J8IKM6ixs" + "doycIL7jV46U1ufYmBIbpKwbI0375bO2esP7BUUCgYBqmx7GJnyOapOYlR7wHhYL" + "rXk0QWwE7j3d4zQCHGOqzFqWxyIi1hsQYCwgOeobmd0r5kULRRptYBKvflEiboVq" + "RL3VeyR9ZIEDkbCUewwf2qn6EoOCfi7x6/36brhn8r3mWC9rvNiKB33iBJrkin1p" + "f0aUgyrlhk1aMnDDBFFb8A==" + "-----END PRIVATE KEY-----"; rsaPrivateKey = rsaPrivateKey.replace("-----BEGIN PRIVATE KEY-----", ""); rsaPrivateKey = rsaPrivateKey.replace("-----END PRIVATE KEY-----", ""); PKCS8EncodedKeySpec keySpec = new PKCS8EncodedKeySpec(Base64.getDecoder().decode(rsaPrivateKey)); KeyFactory kf = KeyFactory.getInstance("RSA"); PrivateKey privKey = kf.generatePrivate(keySpec); return privKey; } } ``` Required libraries ``` io.jsonwebtoken jjwt-api 0.11.2 io.jsonwebtoken jjwt-impl 0.11.2 runtime io.jsonwebtoken jjwt-jackson 0.11.2 runtime ``` To invoke this method you must surrond it with try catch, and you'll be ready to go example : ``` String bearer = null; try { bearer = JWTIO.createJwtSignedHMAC(); } catch (InvalidKeySpecException e) { e.printStackTrace(); } catch (NoSuchAlgorithmException e) { e.printStackTrace(); } ``` Upvotes: 1 <issue_comment>username_3: It is not working because of the line separators. You should also remove the \n by ``` privKeyPEM = privKeyPEM.replaceAll("\n", ""); ``` after ``` privKeyPEM = privKeyPEM.replace("-----END RSA PRIVATE KEY-----", ""); ``` Upvotes: 1
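If converting the key with openssl is not an option, Bouncy Castle can read the PKCS#1 (`BEGIN RSA PRIVATE KEY`) PEM directly. A sketch, assuming the `bcpkix` artifact is on the classpath:
```java
import java.io.StringReader;
import java.security.PrivateKey;
import org.bouncycastle.openssl.PEMKeyPair;
import org.bouncycastle.openssl.PEMParser;
import org.bouncycastle.openssl.jcajce.JcaPEMKeyConverter;

public class Pkcs1Reader {
    // Parses a PKCS#1 PEM without a prior openssl pkcs8 conversion
    public static PrivateKey read(String pem) throws Exception {
        try (PEMParser parser = new PEMParser(new StringReader(pem))) {
            PEMKeyPair keyPair = (PEMKeyPair) parser.readObject();
            return new JcaPEMKeyConverter().getKeyPair(keyPair).getPrivate();
        }
    }
}
```
The resulting `PrivateKey` can be passed to `Jwts.builder().signWith(...)` exactly as in the answers above.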
2018/03/16
3,239
9,068
<issue_start>username_0: I am not sure what has happened, but all of a sudden I am getting weird behaviour in my controller. The controller is set up to return a json serialized object like so:

```
return Ok(JsonConvert.SerializeObject(result, Formatting.Indented));
```

It was working correctly but now it is returning this instead:

```
"{\r\n \"ProductID\": 1,\r\n \"ArticleID\": \"a1\",\r\n \"ProductDescription\": \"new dress\",\r\n \"ProductType\": \"1\",\r\n \"MaterialDescription\": \"cotton\",\r\n \"Qty\": 1,\r\n \"SizeID\": 1,\r\n \"RetailPrice\": 22.00,\r\n \"ImagePath\": \"C:\\\\Users\\\\Harry\\\\Desktop\\\\IMG_8931.JPG\",\r\n \"ProductVaritiesID\": 1,\r\n \"Discount\": 0.00,\r\n \"QuantityInStock\": 10,\r\n \"ProductTypeID\": 1,\r\n \"ProductOrderQuantity\": 0\r\n}"
```

However, if I return this object without serializing like so:

```
return Ok(result);
```

It returns the object like so:

```
{
  "ProductID": 1,
  "ArticleID": "a1",
  "ProductDescription": "new dress",
  "ProductType": "1",
  "MaterialDescription": "cotton",
  "Qty": 1,
  "SizeID": 1,
  "RetailPrice": 22,
  "ImagePath": "C:\\Users\\Harry\\Desktop\\IMG_8931.JPG",
  "ProductVaritiesID": 1,
  "Discount": 0,
  "QuantityInStock": 10,
  "ProductTypeID": 1,
  "ProductOrderQuantity": 0
}
```

Controller:

```
[EnableCors(origins: "*", headers: "*", methods: "GET")]
[Route("api/product/getproductbybarcode")]
[ResponseType(typeof(Product))]
public IHttpActionResult GetProductByBarcode(string barcodevalue)
{
    if (string.IsNullOrEmpty(barcodevalue))
        return NotFound(); //replace with correct message
    var result = _iproduct.GetProductByBarcode(barcodevalue);
    return Ok(JsonConvert.SerializeObject(result, Formatting.Indented));
}
```

I cannot figure out, what is causing the Json serialization to fail?
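The behaviour in the question is consistent with double serialization: `Ok(JsonConvert.SerializeObject(...))` hands Web API a plain string, and the JSON formatter then serializes that string again, producing the escaped `"{\r\n ...}"` output shown. A minimal sketch of the usual fix:
```csharp
// Sketch: return the object itself and let Web API serialize it exactly once
public IHttpActionResult GetProductByBarcode(string barcodevalue)
{
    if (string.IsNullOrEmpty(barcodevalue))
        return NotFound();
    var result = _iproduct.GetProductByBarcode(barcodevalue);
    return Ok(result); // Ok(JsonConvert.SerializeObject(result)) serializes twice
}
```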
2018/03/16
944
3,103
<issue_start>username_0: I scaled a matrix based on its columns, like this: ``` scaler = MinMaxScaler(feature_range=(-1, 1)) data = np.array([[-1, 2], [-0.5, 6], [0, 10], [1, 18]]) scaler = scaler.fit(data) data_scaled = scaler.transform(data) ``` the `data_scaled` gave me the following: ``` array([[-1. , -1. ], [-0.5, -0.5], [ 0. , 0. ], [ 1. , 1. ]]) ``` Which is the desired output. However, I'm trying to inverse the scaling of the first column of this matrix, so I tried the following (the error is shown below each line of code): ``` scaler.inverse_transform(data_scaled[:,1].reshape(1,-1)) Traceback (most recent call last): File "c:\anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2862, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "", line 1, in scaler.inverse\_transform(data\_scaled[:,1].reshape(1,-1)) File "c:\anaconda3\lib\site-packages\sklearn\preprocessing\data.py", line 385, in inverse\_transform X -= self.min\_ ValueError: operands could not be broadcast together with shapes (1,4) (2,) (1,4) ``` Also, I tried: ``` scaler.inverse_transform(data_scaled[:,1].reshape(-1,1)) Traceback (most recent call last): File "c:\anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2862, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "", line 1, in scaler.inverse\_transform(data\_scaled[:,1].reshape(-1,1)) File "c:\anaconda3\lib\site-packages\sklearn\preprocessing\data.py", line 385, in inverse\_transform X -= self.min\_ ValueError: non-broadcastable output operand with shape (4,1) doesn't match the broadcast shape (4,2) ``` So, how to rescale the first column of that matrix?<issue_comment>username_1: `scaler` remembers that you passed it a 2D input with two columns, and works under the assumption that all subsequent data passed to it will have the same number of features/columns. If it's only the first column you want, you will still need to pass `inverse_transform` an input with the *same number of columns*. Take the first column from the result and discard the rest. ``` scaler.inverse_transform(data_scaled)[:, [0]] array([[-1. ], [-0.5], [ 0. ], [ 1. ]]) ``` This is somewhat wasteful, but is a limitation of the sklearn API. Upvotes: 4 <issue_comment>username_2: sklearn.preprocessing.MinMaxScaler has attributes like min\_ and scale\_ <https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html#sklearn.preprocessing.MinMaxScaler> ------------------------------------------------------------------------------------------------------------------------------ you could **transfer these attribute** of that particular column to a new empty minmaxscaler that would solve your problem. --- Ex: **for a [?,4] array** [![enter image description here](https://i.stack.imgur.com/pAieA.png)](https://i.stack.imgur.com/pAieA.png) **transfer of attributes** [![enter image description here](https://i.stack.imgur.com/evEQG.png)](https://i.stack.imgur.com/evEQG.png) Upvotes: 4
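username_2's attribute-transfer idea can also be applied arithmetically. A sketch for inverting a single column, relying on the documented relationship `scaled = X * scale_ + min_` (applied per column by `MinMaxScaler`):
```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler(feature_range=(-1, 1))
data = np.array([[-1, 2], [-0.5, 6], [0, 10], [1, 18]])
data_scaled = scaler.fit_transform(data)

col = 0  # column to invert
# invert the per-column affine transform: X = (scaled - min_) / scale_
original_col = (data_scaled[:, col] - scaler.min_[col]) / scaler.scale_[col]
print(original_col)  # [-1.  -0.5  0.   1. ]
```
This avoids rebuilding a full two-column array just to recover one column.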
2018/03/16
2,545
8,299
<issue_start>username_0: I'm trying to display my output in a 4X4 format. The program receives name of a text file then get the values in the text file and store in the 2D array. Then later displays them like this ```none 7 2 4 5 5 2 6 4 2 2 5 5 9 2 4 5 ``` But the problem is, it doesn't display like that, I'm not sure if it's my loop or what. Any ideas. it runs fine with no errors but the numbers don't display right. Here's my code ``` int main () { int i = 0; int j = 0; int value = 0; int a[4][4]; char ch, file_name[100]; FILE *fp; printf("Enter file name?"); gets(file_name); fp = fopen("file.txt", "r"); if (fp == NULL) { perror("Error \n"); } else { // printf("\n"); while (!feof(fp) && fscanf (fp, "%d ", &value)) { a[i][j] = value; for ( i = 0; i < 4; i++ ) { for ( j = 0; j < 4; j++ ) { printf("%d ",a[i][j] ); } printf("\n"); } } } fclose(fp); return 0; } ```<issue_comment>username_1: Your code suffers from various problems since you are mixing the code that reads the input and code that writes the output in one `while` loop. Separate the code that reads the input and the code that creates the output. Ideally, put them into different functions. Declare the functions as: ``` void readData(FILE* in, int a[4][4]); void writeData(FILE* out, int a[4][4]); ``` Use them as: ``` int main() { int a[4][4]; // Your code to open the file. readData(in, a); writeData(stdout, a); } ``` Implement them as: ``` void readData(FILE* in, int a[4][4]) { for ( i = 0; i < 4; i++ ) { for ( j = 0; j < 4; j++ ) { if ( fscanf(in, "%d", &a[i][j]) != 1 ) { printf("Unable to read an array element.\n"); exit(0); } } } } void writeData(FILE* out, int a[4][4]) { for ( i = 0; i < 4; i++ ) { for ( j = 0; j < 4; j++ ) { fprintf(out, "%d ", a[i][j]); } fprintf(out, "\n"); } } ``` Upvotes: 1 <issue_comment>username_2: Well, let's take your problems apart a piece at a time. First, the salient part of your code regarding input and attempted output: ``` int i = 0; int j = 0; int value = 0; int a[4][4]; ... while (!feof(fp) && fscanf (fp, "%d ", &value)) { a[i][j] = value; for ( i = 0; i < 4; i++ ) { for ( j = 0; j < 4; j++ ) { printf("%d ",a[i][j] ); } printf("\n"); } } ``` Before we look at the `!feof` problem. let's look at the overall logic of your loop structure. When you enter your `while` loop, the values of `i = j = 0;`. Presuming your file is open and there is an integer to read, you will fill `value` with the first integer in the file and then assign that value to the first element of your array with: ``` a[i][j] = value; ``` Now, you have stored 1-integer at `a[0][0]`. (all other values in `a` are **uninitialized** and **indeterminate**) Inexplicably, you then attempt to output the entire array, ... uninitialized values and all. ``` for ( i = 0; i < 4; i++ ) { for ( j = 0; j < 4; j++ ) { printf("%d ",a[i][j] ); } printf("\n"); } ``` Attempting to access an uninitialized value invokes *Undefined Behavior* (your program is completely unreliable from the first attempt to read `a[0][1]` on). It could appear to run normally or it could `SegFault` or anything in between. You need to complete the read of *all* values into your array before you begin iterating over the indexes in your array outputting the values. But wait... there is more... If you did not SegFault, when you complete the `for` loops, what are the values of `i` & `j` now? Your loops don't complete until both `i` and `j` are `4`, so those are the values they have at the end of the first iteration of your `while` loop. 
Now let's go to the next iteration of your `while` loop. Presuming there are two integer values in the file you are reading, you read the second integer into `value`. *What happens next?* You one-up yourself. After already invoking undefined behavior 15 times by attempting to read 15 uninitialized values `a[0][1] -> a[3][3]`, you then attempt to write beyond the bounds of `a` with: ``` a[4][4] = value; /* remember what i & j are now? */ ``` You then hit your `for` loops..., again with `a[0][1]` holding the only validly initialized value and the cycle repeats. But wait..., there is more... After having read the last integer in your file, you arrive at the beginning of your `while` loop one last time: ``` while (!feof(fp) && fscanf (fp, "%d ", &value)) ``` You test `!feof(fp)` and no `EOF` has been set because your last read was a valid read of the last integer which completed with the last *digit* and did not trigger a stream error, so you proceed to `fscanf (fp, "%d ", &value)` which now returns `EOF` (e.g. `-1`), but since you simply test whether `fscanf(..)`, `-1` tests **TRUE** so off you go again through the entire body of your loop invoking undefined behavior at least 16 more times. Are you beginning to understand why the output may not have been what you wanted? You have already had a good answer on how to go about the read and print. I'll offer just an alternative that does not use any functions so it may help you follow the logic a bit better. The following code just reads from the filename given as the first argument, or from `stdin` by default if no argument is given. The code *validates* that the file pointer points to a file that is open for reading and then enters a `while` loop reading an integer at a time into `&array[rows][cols]` with `fscanf` and *validating* the **return** of `fscanf` is `1`. Anything else is considered failure. The remainder of the read loop just increments `cols` until is reaches `NCOLS` (`4` here - being the number of columns per-row), then it increments the row-count in `rows` and sets the column-index `cols` back to zero. A final *validation* is performed before output to insure `NROWS` of integers were read and that `cols` was set to zero during the final iteration of the read-loop. The contents of `array` is then output. The code is fairly straight-forward: ``` #include #define NROWS 4 #define NCOLS NROWS int main (int argc, char \*\*argv) { int array[NROWS][NCOLS] = {{0}}, /\* declare and intialize array \*/ rows = 0, cols = 0; /\* rows / columns index \*/ FILE \*fp = argc > 1 ? fopen (argv[1], "r") : stdin; if (!fp) { /\* validate file open for reading \*/ fprintf (stderr, "error: file open failed '%s'.\n", argv[1]); return 1; } /\* validate no more than NROWS of NCOLS integers are read \*/ while (rows < NROWS && fscanf (fp, "%d", &array[rows][cols]) == 1) { cols++; /\* increment column \*/ if (cols == NCOLS) { /\* if row full \*/ rows++; /\* increment rows \*/ cols = 0; /\* zero columns \*/ } } if (fp != stdin) fclose (fp); /\* close file if not stdin \*/ if (rows != NROWS || cols) { /\* validate rows = NROWS & cols = 0 \*/ fprintf (stderr, "error: read only %d rows and %d cols.\n", rows, cols); return 1; } for (int i = 0; i < NROWS; i++) { /\* output stored values \*/ for (int j = 0; j < NCOLS; j++) printf (" %2d", array[i][j]); putchar ('\n'); } return 0; } ``` (you have already been advised not to use `gets` which is subject to easy buffer overflow and so insecure it has been completely removed from the C11 library. 
If your professor continues to suggest its use -- go find a new professor) **Example Input File** ``` $ cat dat/arr2dfscanf.txt 7 2 4 5 5 2 6 4 2 2 5 5 9 2 4 5 ``` **Example Use/Output** ``` $ ./bin/arr2dfscanf ``` Also note, that by reading with `fscanf` in this manner, it doesn't matter what format your input is in -- as long as it is all integer values. This will produce the same output: ``` $ echo "7 2 4 5 5 2 6 4 2 2 5 5 9 2 4 5" | ./bin/arr2dfscanf ``` Look things over and let me know if you have further questions. Upvotes: 0
2018/03/16
501
1,661
<issue_start>username_0: I'm doing a join between two tables to push criteria values into a record description. The problem is some records do not have some/all criteria and the entire string fails: `Select Concat(Description,' ',C.CritieraNameA,': ',T.CriteriaValueA,' ',C.CriteriaNameB,': ',T.CriteriaValueB) From Records T Inner Join Company C On T.CompanyID=C.ID` so I end up with **Supermarket specializing in Dairy Products Hours: 8am-5pm Credit Cards: Yes** and **Gone with the Wind Running Time: Too long Format: DVD** This works fine until I hit a record where either * There is no CriteriaTypeA or CriteriaTypeB in the Company Table * There is no CriteriaValueA or CriteriaValueB in the Records Table Is there a way to do this select so when it doesn' find a CriteriaValue in Records: **Supermarket specializing in Meats Hours: Credit Cards: Yes** or a CriteriaName in Company: **Porsche Type: Sports Car** It does not simply return an empty result?<issue_comment>username_1: `IfNull` could be useful in this case - it will detect a null and replace it with a value you supply. ``` IFNULL(CriteriaTypeA , "not given") ``` Upvotes: 1 <issue_comment>username_2: This should do it: ``` Select Concat( Description, Coalesce(Concat(' ',C.CritieraNameA,': ',T.CriteriaValueA), ''), Coalesce(Concat(' ',C.CriteriaNameB,': ',T.CriteriaValueB), '') ) From Records T Inner Join Company C On T.CompanyID=C.ID ``` But as I wrote in the comment: Don't implement output logic in SQL. Upvotes: 0 <issue_comment>username_3: Concat\_WS to the rescue: `Concat_WS('',C.CriteriaTypeA,': ',T.CriteriaValue)` Upvotes: 1
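The `Concat_WS` answer works because `CONCAT_WS` skips NULL arguments, whereas plain `CONCAT` returns NULL if any argument is NULL. A fuller sketch against the question's tables (column names follow the question):
```sql
-- Inner CONCAT(...) becomes NULL when a criteria name or value is missing,
-- and CONCAT_WS silently drops those NULLs instead of nulling the whole string.
SELECT CONCAT_WS(' ',
         T.Description,
         CONCAT(C.CriteriaNameA, ': ', T.CriteriaValueA),
         CONCAT(C.CriteriaNameB, ': ', T.CriteriaValueB))
FROM Records T
INNER JOIN Company C ON T.CompanyID = C.ID;
```
A record missing criteria B then renders as `Porsche Type: Sports Car` with no dangling label.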
2018/03/16
1,015
3,651
<issue_start>username_0: I have two tables temp\_number and temp\_port, I want to select only those number which are having only port name is 'ip sub' and want to exclude those numbers are having both port names or port name is 'local loop' ``` temp_number ----------------------------------- numberid | name ----------------------------------- 1 | abc 2 | def 3 | ghi ----------------------------------- temp_port ----------------------------------- portid | numberid | name ----------------------------------- 1 | 1 | local loop 2 | 1 | ip sub 3 | 2 | local loop 4 | 3 | ip sub ----------------------------------- CREATE TABLE temp_number ( numberid number(10), --pk name varchar2(50)); CREATE TABLE temp_port ( portid number(10), --pk numberid number(10), --fk name varchar2(50)); insert into temp_number values(1,'abc'); insert into temp_port values(1,1,'local loop'); insert into temp_port values(2,1,'ip sub'); insert into temp_number values(2,'def'); insert into temp_port values(3,2,'local loop'); insert into temp_number values(3,'ghi'); insert into temp_port values(4,3,'ip sub'); What I tried : select n.name, p.name from temp_number n, temp_port paving where n.numberid=p.numberid and p.name not in ('local loop'); actual result: ----------------------------------- name | Name ----------------------------------- abc | ip sub ghi | ip sub expected result: ----------------------------------- name | Name ----------------------------------- ghi | ip sub ```<issue_comment>username_1: You may want to use a filter NOT EXISTS: ``` SELECT DISTINCT t.name, p.name FROM temp_number t INNER JOIN temp_port p ON p.numberid = t.numberid WHERE NOT EXISTS (SELECT 1 FROM temp_port s WHERE s.numberid = t.numberid AND s.name = 'local loop') ``` This way, if for some number there's at least a 'local loop' port, it filters out that number Upvotes: 1 <issue_comment>username_2: Try this filtering ``` SELECT DISTINCT t.name, p.name FROM temp_number t, temp_ports p Where t.numberid=p.numberid AND t.numberid NOT IN (SELECT s.numberid FROM temp_port s WHERE s.name = 'local loop') ``` Upvotes: 2 [selected_answer]<issue_comment>username_3: Select those having 'ip sub' port name only and no other port names: ``` select n.* from temp_number n join (select numberid, sum(case when name = 'ip sub' then 1 else 0 end) ipCount, count(*) totalCount from temp_port group by numberid ) t on n.numberId = t.numberId where t.ipCount = totalCount and t.ipCount > 0 ``` Upvotes: 0 <issue_comment>username_4: You can get the numbers by just doing: ``` select numberid from temp_port group by numberid having min(name) = 'ip sub' and min(name) = max(name); ``` If you actually want the name, joining in the name is pretty easy: ``` select n.numberid, n.name from temp_number n join temp_port p on n.numberid = p.numberid group by n.numberid, n.name having min(p.name) = 'ip sub' and min(p.name) = max(p.name); ``` Upvotes: 0 <issue_comment>username_5: we can eliminate the not needed entries by using sub query as follows. ``` Select a.name, b.name From temp_number a left join temp_port b on b.numberid = a.numberid Where b.name = 'ip sub' and a.numberid not in (Select c.numberid from temp_port c where c.name <> 'ip sub') ``` Upvotes: 0
2018/03/16
970
3,501
<issue_start>username_0: So I have a list of viewmodels that i iterate through and one of the properties in the viewmodel object is a dictionary. In my customercontroller, in the details action I get all the viewmodels that correspond to the id from the asp-route and in the view i have a form where I present all the dictionary values so that you can modify them if you like. Afterwards you can submit the form. This is where I se that the list of viewmodels are "0". Why is this? this is my model: ``` public class CustomerViewModel { public CustomerViewModel() { EmployeeAndHours = new Dictionary(); } public string projectName { get; set; } public Dictionary EmployeeAndHours { get; set; } } ``` this is the get action: ``` // GET: Customers/Details/5 public IActionResult Details(int? id) { if (id == null) { return NotFound(); } var customers = _customerHelper.GetCustomerDetails(id); if (customers == null) { return NotFound(); } return View(customers); } ``` this is the post action: ``` [HttpPost] public IActionResult EditCustomerDetailViewModel(List customers) { //TODO return View("Details"); } ``` this is my view: ```html @model List @foreach (var customer in Model) { ##### @customer.projectName @foreach (var employee in customer.EmployeeAndHours) // This is the dictionary { --- } } ```<issue_comment>username_1: You are not referencing the model name from your POST action, try this: ``` @{int index = 0;} @foreach (var customer in Model) { ... @foreach (var employee in customer.EmployeeAndHours) { --- } ... index++; } ``` Upvotes: -1 <issue_comment>username_2: You cannot use a `foreach` loop to generate form controls for collection items and get correct 2-way model binding. You need to use a `for` loop or and `EditorTemplate` for typeof `CustomerViewModel` as explained in [Post an HTML Table to ADO.NET DataTable](https://stackoverflow.com/questions/30094047/post-an-html-table-to-ado-net-datatable/30094943#30094943). In addition, you should avoid binding to a `Dictionary` because you cannot use the strong typed `HtmlHelper` method or TagHelpers to give 2-way model binding. In order oo bind to your current model, your `name` attribute would need to be in the format `name="[#].EmployeeAndHours[Key]"` where `#` is the zero-based collection indexer. Instead, modify your view models to ``` public class CustomerViewModel { public string ProjectName { get; set; } public List EmployeeHours { get; set; } } public class EmployeeHoursViewModel { public string Employee { get; set; } public int Hours{ get; set; } } ``` And the view then becomes ``` @model List @for(int i = 0; i < Model.Count i++) { .... ##### @Model[i].ProjectName @Html.HiddenFor(m => m[i].ProjectName) // or @for(int j = 0; j < Model[i].EmployeeHours.Count; j++) { @Html.HiddenFor(m => m[i].EmployeeHours[j].Employee) @Html.LabelFor(m => m[i].EmployeeHours[j].Hours, Model[i].EmployeeHours[j].Employee) // or @Model[i].EmployeeHours[j].Employee @Html.TextBoxFor(m => m[i].EmployeeHours[j].Hours, new { type = "range", ... }) // or } } } ``` Note the above code assumes you want to post back the value of `ProjectName` (hence the hidden input), and that you want to display the employee name adjacent each 'Hours' input. Upvotes: 2 [selected_answer]
2018/03/16
423
1,397
<issue_start>username_0: I want to create a hotkey using a backslash key (`\`). Namely, I would like to assign `Media_Prev` to Windows button + backslash. I have tried both `LWin & \::Media_Prev` and `#\::Media_Prev`. However, these do not work: it just normally sends the backslash character. In AutoHotkey's key history, I do see that both `\` and `LWin` register when I press this key combination. Interestingly, something like `LWin & c::Media_Prev` or `#v::Media_Prev` does work well, just not with the backslash character.<issue_comment>username_1: This worked for me, using AHK version 1.1.13.01 ``` LWin & \:: run Notepad ``` you can also use scan codes - something like ``` SC15B & SC02B:: run Notepad ``` should have the same effect reference: [here](https://www.autohotkey.com/docs/KeyList.htm) Upvotes: 3 [selected_answer]<issue_comment>username_2: I successfully tested the following: ``` <#vk dc::Send {Media_Prev} ``` * `<` is Left key * `#` is Windows key * `VK DC` is virtual key number DC (you can find this code using AHK menu **View** > **Key History and Script Info**). With VK, the same key works regardless of active keyboard layout. **Note:** Don't do `LWin & key` or `Ctrl & key` etc... `&` has a bit different meaning. For the above purpose AHK provides prefixes as shown above, e.g. `^b` or `#F1` etc. See AHK help on *Remapping Keys and Buttons* Upvotes: 0
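Combining the two answers for the original media-key goal — an untested sketch (`vkDC` is the layout-independent virtual-key form of backslash mentioned in the second answer):
```
; Sketch: Windows key + backslash -> previous track
LWin & \::Send {Media_Prev}
; layout-independent alternative using the virtual key code
<#vkDC::Send {Media_Prev}
```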
2018/03/16
976
3,280
<issue_start>username_0: ok so I understand this is a very basic JS logic but I am trying to replace any `document.write()` with `.innerHTML` and I tried it with the code below ``` function writeTimesTable(num) { for(let i = 1; i < 10; i++ ) { let writeString = i + " * " + num + " = "; writeString = writeString + (i * num); writeString = writeString + " "; document.write(writeString); } } function newTable() { for(let i = 1; i <= 10; i++ ) { let para = document.getElementById("paragraph"); para.innerHTML = writeTimesTable(i) } } ``` I have a `div` element with the ID of `paragraph` already. I keep getting `undefined` when I look at the `div#paragraph` and the rest of my code outputs under my `script` tag but not in the `div` element. What am I doing wrong?<issue_comment>username_1: Your function `writeTimesTable()` needs to return a value. Your `writeString` string, needs to be concatenated as well, you can do that with `+=` like seen below: ``` function writeTimesTable(num) { let writeString = ""; for(let i = 1; i < 10; i++ ) { writeString += i + " * " + num + " = "; writeString += writeString + (i * num); writeString += writeString + " "; } return writeString } ``` Using `para.innerHTML = writeTimesTable(i)` probably isn't intended, as it will just display the last loop, so you might also want to use `+=` here as well: ``` para.innerHTML += writeTimesTable(i) ``` Upvotes: 1 <issue_comment>username_2: You normally want to avoid `document.write` because it literally writes out to the document. I have also taken the liberty of doing the DOM creation off-page during the loop, and just add it to the actual DOM at the end. This means you aren't constantly re-drawing the page while you loop. This is better than changing your `innerHTML =` to `innerHTML +=`, which you would need to do if you wanted to avoid overwriting each previous iteration of the loop. ```js function writeTimesTable(num) { for(let i = 1; i < 10; i++ ) { let writeString = i + " * " + num + " = "; writeString = writeString + (i * num); writeString = writeString + " "; return writeString; } } function newTable() { const inner = document.createElement('div'); for(let i = 1; i <= 10; i++ ) { const item = document.createElement('div'); item.innerHTML = writeTimesTable(i); inner.appendChild(item); } let para = document.getElementById("paragraph"); para.appendChild(inner); } newTable(); ``` Upvotes: 0 <issue_comment>username_3: Your `newTable()` function with the 10 loops is useless. You're doing 10 times the same stuff over a single DOM element. Don't use `document.write`... I'd do it like: ```js function newTable( num ) { var HTML = ""; for (var i=1; i <= num; i++) { HTML += i +" * "+ num +" = "+ (i*num) +" "; } return HTML; // Return the concatenated HTML } document.getElementById("paragraph").innerHTML = newTable(10); ``` ```html asdasdasd ``` Or in a super uselessly cryptic **ES6 way**: ```js const newTable = n => Array(n).fill().map((_,i) => `${i+=1} * ${n} = ${i*n} ` ).join(''); document.getElementById("paragraph").innerHTML = newTable(10); ``` ```html asdasdasd ``` Upvotes: 0
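For reference, a compact sketch of the same approach that builds each row's string completely before touching the DOM (function and element names follow the question):
```js
// Sketch: build one row's string, then append it to #paragraph
function writeTimesTable(num) {
  let out = '';
  for (let i = 1; i < 10; i++) {
    out += `${i} * ${num} = ${i * num} `;
  }
  return out; // return the string instead of document.write-ing it
}

function newTable() {
  const para = document.getElementById('paragraph');
  for (let i = 1; i <= 10; i++) {
    para.innerHTML += writeTimesTable(i) + '<br>';
  }
}
```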
2018/03/16
851
3,031
<issue_start>username_0: I'm trying to render an curved vertical list like this iOS component: <https://github.com/makotokw/CocoaWZYCircularTableView> That component (written in Obj-c) iterates the visible cells when laying them out, and sets the frame (i.e. indent) using asin. I know in React Native I can set the leftMargin style in the renderItem callback, but I can't figure out how to get the on-screen index of the item - all I have is the index into the source data. And also, at that point, I don't think I have access to the absolute position. Any ideas?<issue_comment>username_1: React-native's `FlatList` component has a prop called [`onLayout`](https://facebook.github.io/react-native/docs/view.html#onlayout). You can get the position of the component on screen with this prop. > > **onLayout** > > > Invoked on mount and layout changes with: > > > > ``` > {nativeEvent: { layout: {x, y, width, height}}} > > ``` > > This event is fired immediately once the layout has been calculated, > but the new layout may not yet be reflected on the screen at the time > the event is received, especially if a layout animation is in > progress. > > > Upvotes: 1 <issue_comment>username_2: The function you are looking for is **onViewableItemsChanged**. You can use it with **viewabilityConfig** which provides us with **minimumViewTime,viewAreaCoveragePercentThreshold,waitForInteraction** which can be set accordingly ``` const VIEWABILITY_CONFIG = { minimumViewTime: 3000, viewAreaCoveragePercentThreshold: 100, waitForInteraction: true, }; _onViewableItemsChanged = (info: { changed: Array<{ key: string, isViewable: boolean, item: any, index: ?number, section?: any, }> } ){ //here you can have the index which is visible to you } ``` Upvotes: 2 <issue_comment>username_3: Thanks for both answers. What I have ended up doing is deriving the visible items using the scroll offset of the list. This is simple because the list items all have the same height. I do this in the onScroll handler, and at that point I calculate the horizontal offset for each item (and I use leftMargin / rightMargin to render this). It's not perfect, but it does give me an elliptical list. ``` _handleScroll = (event) => { const topItemIndex = Math.floor(event.nativeEvent.contentOffset.y / LIST_ITEM_HEIGHT); const topItemSpare = LIST_ITEM_HEIGHT-(event.nativeEvent.contentOffset.y % LIST_ITEM_HEIGHT); const positionFromEllipseTop = (forIndex-topItemIndex)*LIST_ITEM_HEIGHT+topItemSpare; const positionFromOrigin = Math.floor(Math.abs(yRadius - positionFromEllipseTop)); const angle = Math.asin(positionFromOrigin / yRadius); if (orientation === 'Left') { marginLeft = 0; marginRight = ((xRadius * Math.cos(angle)))-LIST_ITEM_HEIGHT; alignSelf = 'flex-end'; } else if (orientation === 'Right') { marginLeft = (xRadius * Math.cos(angle))-LIST_ITEM_HEIGHT; marginRight = 0; alignSelf = 'flex-start'; } } ``` Upvotes: 2
2018/03/16
1,444
5,011
<issue_start>username_0: I am using Keras to perform landmark detection - specifically locating parts of the body on a picture of a human. I have gathered around 2,000 training samples and am using rmsprop w/ mse loss function. After training my CNN, I am left with `loss: 3.1597e-04 - acc: 1.0000 - val_loss: 0.0032 - val_acc: 1.0000` I figured this would mean my model would perform well on the test data, however, instead the predicted points are way off from the labeled points. Any ideas or help would be greatly appreciated! ```py IMG_SIZE = 96 NUM_KEYPOINTS = 15 NUM_EPOCHS = 50 NUM_CHANNELS = 1 TESTING = True def load(test=False): # load data from CSV file df = pd.read_csv(fname) # convert Image to numpy arrays df['Image'] = df['Image'].apply(lambda im: np.fromstring(im, sep=' ')) df = df.dropna() # drop rows with missing values X = np.vstack(df['Image'].values) / 255. # scale pixel values to [0, 1] X = X.reshape(X.shape[0], IMG_SIZE, IMG_SIZE, NUM_CHANNELS) X = X.astype(np.float32) y = df[df.columns[:-1]].values y = (y - (IMG_SIZE / 2)) / (IMG_SIZE / 2) # scale target coordinates to [-1, 1] X, y = shuffle(X, y, random_state=42) # shuffle train data y = y.astype(np.float32) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42) return X_train, X_test, y_train, y_test def build_model(): # construct the neural network model = Sequential() model.add(Conv2D(16, (3, 3), activation='relu', input_shape=(IMG_SIZE, IMG_SIZE, NUM_CHANNELS))) model.add(MaxPooling2D(2, 2)) model.add(Conv2D(32, (3, 3), activation='relu')) model.add(MaxPooling2D(2, 2)) model.add(Conv2D(64, (3, 3), activation='relu')) model.add(MaxPooling2D(2, 2)) model.add(Flatten()) model.add(Dropout(0.5)) model.add(Dense(500, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(NUM_KEYPOINTS * 2)) return model if __name__ == '__main__': X_train, X_test, y_train, y_test = load(test=TESTING) model = build_model() sgd = optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True) model.compile(optimizer=sgd, loss='mse', metrics=['accuracy']) hist = model.fit(X_train, y_train, epochs=NUM_EPOCHS, verbose=1, validation_split=0.2) # save the model model.save_weights("/output/model_weights.h5") histFile = open("/output/training_history", "wb") pickle.dump(hist.history, histFile) ```<issue_comment>username_1: It is impossible to tell from your question, but I will venture a guess here by some implications of your data split. Typically, when one splits one's data into more than two sets, one is using all but one of them to train on some parameter or another. For example, the first split is used to choose the model weights, the second split to choose the model architecture, etc. Presumably you are training *something* with your 'validation' set, otherwise you wouldn't have it. Thus, the problem is almost certainly overfitting. The way that you detect overfitting, usually, is the difference in the accuracy of your model on data used to train your model (which is usually everything except for one single split) which you are calling your 'training' and 'validation' splits, and the accuracy of a split which your model has not touched, which you are calling your 'test' split. So, per your question-comment "I assume if the validation accuracy is that high then there is no overfitting, right?". No. 
If the difference between the accuracy of your model on any data that you've used to train anything at all is higher than the accuracy of your model on data that your model has never touched in any way shape form or fashion, then you've overfit. Which seems to be the case with you. OTOH, it may be the case that you've simply not shuffled your data. It's impossible to tell without having a look-see at the training/testing pipeline. Upvotes: 0 <issue_comment>username_2: According to this question [How does keras define "accuracy" and "loss"?](https://stackoverflow.com/questions/41531695/how-does-keras-define-accuracy-and-loss) your "accuracy" is defined as categorical accuracy which makes absolutely no sense for your problem. After training you are left with a 10x difference between your training loss and validation loss which would suggest overfitting (hard to say for sure without a graph and some examples). To start fixing it: * Use a metric that makes sense in your context and you understand what it does and how it's computed. * Take random examples where the metric is very good and where is very bad and manually validate that that is really the case (otherwise you need a different metric). In your case I would imagine a metric based on the distance between the desired location and the predicted ones. This is not a default thing and you would have to implement it yourself. Always be suspicious if the model says it's perfect. Upvotes: 2 [selected_answer]
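The selected answer's distance-based metric could be sketched like this for the question's 15-keypoint model (an illustration, not from the thread; `model` and `sgd` are the objects defined in the question):
```py
import keras.backend as K

NUM_KEYPOINTS = 15

def mean_keypoint_distance(y_true, y_pred):
    # mean Euclidean distance between true and predicted (x, y) keypoints
    t = K.reshape(y_true, (-1, NUM_KEYPOINTS, 2))
    p = K.reshape(y_pred, (-1, NUM_KEYPOINTS, 2))
    return K.mean(K.sqrt(K.sum(K.square(t - p), axis=-1)))

model.compile(optimizer=sgd, loss='mse', metrics=[mean_keypoint_distance])
```
Unlike categorical accuracy, this number is directly interpretable in (scaled) pixel units, so "good" and "bad" predictions can be spot-checked against it.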
2018/03/16
1,283
3,651
<issue_start>username_0: Suppose I have several lists of colors, where each list correspond to a given person. I want to create a master list, so I know which person has has what colors in their list. Here's example data: ``` Sarah <- c("Red", "Blue", "Yellow", "Green", "Pink") Ben <- c("Black", "White", "Silver", "Pink", "Red", "Purple") Jenny <- c("Green", "Orange", "Gold") people <- list(Sarah, Ben, Jenny) names(people) <- c("Sarah", "Ben", "Jenny") allcolors <- c( Sarah, Ben, Jenny) colorSet <- data.frame(colors = allcolors) ``` I want a master sheet, where reach row corresponds to a color and each column corresponds to a person. If the person has a color in their list, then that cell would be TRUE, if they do not have a color in their list then it would be FALSE. This is what I tried to far, but it hasn't worked. ``` for (i in 1:length(people)) { sub_people <- people[[i]] sub_people_name <- names(people[i]) colorSet$x <- ifelse(which(sub_people %in% colorSet$colors), TRUE, FALSE) names(colorSet)[names(colorSet) == x] <- sub_people_name } ``` This is the error I get: Error in `$<-.data.frame`(`*tmp*`, "x", value = c(TRUE, TRUE, TRUE, TRUE, : replacement has 5 rows, data has 14 Any help would be greatly appreciated!<issue_comment>username_1: This should work with base R: ``` colorSet$Sarah <- colorSet$colors %in% Sarah colorSet$Ben <- colorSet$colors %in% Ben colorSet$Jenny <- colorSet$colors %in% Jenny ``` Where `colorSet` returns: > > > ``` > colors Sarah Ben Jenny > 1 Red TRUE TRUE FALSE > 2 Blue TRUE FALSE FALSE > 3 Yellow TRUE FALSE FALSE > 4 Green TRUE FALSE TRUE > 5 Pink TRUE TRUE FALSE > 6 Black FALSE TRUE FALSE > 7 White FALSE TRUE FALSE > 8 Silver FALSE TRUE FALSE > 9 Pink TRUE TRUE FALSE > 10 Red TRUE TRUE FALSE > 11 Purple FALSE TRUE FALSE > 12 Green TRUE FALSE TRUE > 13 Orange FALSE FALSE TRUE > 14 Gold FALSE FALSE TRUE > > ``` > > This should also work: ``` l <- rbind.data.frame(lapply(people, function(x) colorSet$colors %in% x)) l$colors <- colorSet$colors ``` With `purrr` we could do: ``` purrr::map_df(people, function(x) colorSet$colors %in% x) # Alternatively, if you prefer formula syntax purrr::map_df(people, ~ colorSet$colors %in% .) ``` Returns: > > > ``` > # A tibble: 14 x 3 > Sarah Ben Jenny > > 1 TRUE TRUE FALSE > 2 TRUE FALSE FALSE > 3 TRUE FALSE FALSE > 4 TRUE FALSE TRUE > 5 TRUE TRUE FALSE > 6 FALSE TRUE FALSE > 7 FALSE TRUE FALSE > 8 FALSE TRUE FALSE > 9 TRUE TRUE FALSE > 10 TRUE TRUE FALSE > 11 FALSE TRUE FALSE > 12 TRUE FALSE TRUE > 13 FALSE FALSE TRUE > 14 FALSE FALSE TRUE > > ``` > > Upvotes: 3 [selected_answer]<issue_comment>username_2: ``` Sarah <- c("Red", "Blue", "Yellow", "Green", "Pink") Ben <- c("Black", "White", "Silver", "Pink", "Red", "Purple") Jenny <- c("Green", "Orange", "Gold") m_color <-unique(c(Sarah,Ben,Jenny)) TF <-data.frame(color = m_color) TF$Sarah <- m_color%in%Sarah TF$Ben <- m_color%in%Ben TF$Jenny <- m_color%in%Jenny ``` Upvotes: 0 <issue_comment>username_3: Following uses `sapply`. This way one doesn't have to names of people repeatedly. ``` all_colours <- unique(allcolors) tab <- sapply(people, function(x) all_colours %in% x) rownames(tab) <- all_colours tab # Sarah <NAME> # Red TRUE TRUE FALSE # Blue TRUE FALSE FALSE # Yellow TRUE FALSE FALSE # Green TRUE FALSE TRUE # Pink TRUE TRUE FALSE # Black FALSE TRUE FALSE # White FALSE TRUE FALSE # Silver FALSE TRUE FALSE # Purple FALSE TRUE FALSE # Orange FALSE FALSE TRUE # Gold FALSE FALSE TRUE ``` Upvotes: 0
2018/03/16
1,032
3,574
<issue_start>username_0: I am trying to create a 'secret' guestList to practice some closures and binding in JS. I am currently stuck because I need to use binding so the value of `i` updates after every iteration but I am really new to biding and I am having trouble wrapping my head around this... how do I call my variable `code` from the closure? how do I correctly bind the guesName to my checkCode function? :/ Here my code : ``` function guestListFns(guestList, secretCode){ var topSecretList = []; function codeChecker(code) { if (code === secretCode) { return guestName; } else { return "Secret-Code: Invalid"; } }; for(var i = 0 ; i < guestList.length; i += 1){ var guestName = guestList[i]; topSecretList.push(codeChecker.call(this.guestName, code)); } console.log(topSecretList); return topSecretList; } ``` my testing values : ``` var guestListFns = guestListFns(["Gabriel", "Ben", "Dan", "Griffin", "Cang", "Kate", "Chris"], 512); var guest = guestListFns[1](512); console.log(guest); ``` my return value so far: ``` "code is not defined" ``` Also, I have already figured out how to implement this function simply using map. But what I meant with this exercise is to practice binding so I can understand the concept. Thanks!<issue_comment>username_1: You dont wanna `call`, but you want to `bind`: ``` codeChecker.bind({ guestName }) ``` So now inside codeChecker you can access ``` this.guestName ``` --- Actually you are overcomplicating things: ``` const guestListFns = (arr, secret) => arr.map(el => code => code === secret ? el : "nope"); ``` Upvotes: 1 <issue_comment>username_2: If I understand what you are trying, I think `call()` is the wrong way to go about this. You want an array of partial functions where each function already has the name. You can use `bind()` for this. `call()` actually invokes the function, which isn't what you really want here. `bind()` returns a new function and allows you to set `this` and/or some of the arguments: ```js function guestListFns(guestList, secretCode){ var topSecretList = []; function codeChecker(guestName, code) { if (code === secretCode) { return guestName; } else { return "Secret-Code: Invalid"; } }; for(var i = 0 ; i < guestList.length; i += 1){ var guestName = guestList[i]; topSecretList.push(codeChecker.bind(null, guestName)); } return topSecretList; } var guestListFns = guestListFns(["Gabriel", "Ben", "Dan", "Griffin", "Cang", "Kate", "Chris"], 512); console.log(guestListFns[1](512)); // ben console.log(guestListFns[2](512)); // dan console.log(guestListFns[1](112)); // bad code ``` Upvotes: 1 <issue_comment>username_3: Might this be what you're looking for? ```js function guestListFns(guestList, secretCode){ var topSecretList = []; function codeChecker(code) { if (code === secretCode) { return this.guestName; } else { return "Secret-Code: Invalid"; } }; for(var i = 0 ; i < guestList.length; i += 1){ var guestName = guestList[i]; topSecretList.push(codeChecker.bind({guestName:guestName})); } console.log(topSecretList); return topSecretList; } var gfn = guestListFns(["Gabriel", "Ben", "Dan", "Griffin", "Cang", "Kate", "Chris"], 512); gfn.forEach(f => console.log(f(512))); ``` Upvotes: 1 [selected_answer]
2018/03/16
838
2,947
<issue_start>username_0: I try to create registration in Angular 5. Everything seems to be working properly (accounts are added to the database) however I always get error `TypeError: undefined is not an object (evaluating 'data.Errors[0]')` Any ideas what is wrong? It seems to me that there is a problem with `data.Succeeded` it looks like `Succeeded` was undefined and condition always achieves `else`.

```
OnSubmit(form: NgForm) {
    this.userService.registerUser(form.value)
      .subscribe((data: any) => {
        if (data.Succeeded == true) {
          this.resetForm(form);
          this.toastr.success('User registration successful');
        }
        else
          this.toastr.error(data.Errors[0]);
      });
}
```
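One plausible cause (an assumption — the thread does not confirm it): ASP.NET Core serializes JSON with camelCase property names by default, so the payload exposes `succeeded` and `errors` rather than `Succeeded` and `Errors`, which would leave `data.Succeeded` undefined and send every response into the `else` branch, where `data.Errors[0]` then throws. A defensive sketch:
```ts
// Sketch: read the camelCase properties and guard the error list
this.userService.registerUser(form.value).subscribe((data: any) => {
  if (data.succeeded === true) {
    this.resetForm(form);
    this.toastr.success('User registration successful');
  } else {
    const errors = data.errors || data.Errors || [];
    // Identity errors are usually { code, description } objects
    this.toastr.error(errors.length ? (errors[0].description || errors[0]) : 'Registration failed');
  }
});
```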
2018/03/16
702
2,154
<issue_start>username_0: I have created a table from my MySQL Table. Each row has five columns and the fifth column contains a hyperlink to a new php file. The new php file requires the variables contained in columns 3 and 4. My code for populating the table cells is as follows:

```
<table>
  <tr><th>Date</th><th>First Tee Time</th><th>Opponents</th><th>Venue</th><th></th></tr>
  <?php while($row = mysqli_fetch_array($result)): ?>
  <tr>
    <td><?php echo date('jS F, Y', strtotime($row[0])); ?></td>
    <td>@<?php echo substr($row[1],0,5); ?></td>
    <td><?php echo $row[2]; ?></td>
    <td><?php echo $row[3]; ?></td>
    <td><a href="edit.php?Opponents=<?php echo $row[2];?> & Venue=<?php echo $row[3];?>">Team Sheet/Results</a></td>
  </tr>
  <?php endwhile; ?>
</table>
```

The line

```
<a href="edit.php?Opponents=<?php echo $row[2];?> & Venue=<?php echo $row[3];?>">Team Sheet/Results</a>
```

calls the PHP file edit.php and passes two variables, "Opponents" and "Venue". The edit.php file is as follows:

```
<?php
session_start();
$uOpponentName = $_GET['Opponents'];
$_SESSION["Opponents"] = $uOpponentName;
$uVenue = $_GET['Venue'];
$_SESSION["Venue"] = $uVenue;
header('Location: Web_match_Sheet.php');
?>
```

All works fine, apart from the fact that `$_GET()` truncates the Opponents variable, i.e. Bristol City becomes Bristol. I have tried a number of changes but all failed. Can anyone point me to a solution? Bit of a novice at this!<issue_comment>username_1: Try doing a `urlencode()` on the value before putting it in the link address.

`urlencode($row[2]);`

(Or whichever is causing the issue.)

Upvotes: 2 <issue_comment>username_2: Your query string needs some work. There should be no spaces that aren't part of a value, and any spaces in a value should be properly encoded. (They should appear in the URL as `+`.) PHP has a built-in function [`http_build_query`](http://php.net/manual/en/function.http-build-query.php) that can help you produce a proper query string like this.

```
$query = http_build_query(['Opponents' => $row[2], 'Venue' => $row[3]]);
```

Then you can append that query string to your URL.

```
<a href="edit.php?<?= $query ?>">test</a>
```

Upvotes: 2 [selected_answer]
2018/03/16
6,099
8,487
<issue_start>username_0: I have a very large dataset that can be described by the following `R` code: ``` set.seed(1) data <- data.frame(id = rep(c(rep(1,5), rep(2,5)),2), h = rep(1:2,10), d = c(rep(1,10), rep(2,10)), t = rep(c(sample(c(1,2,3), 5, replace = T), sample(c(1,2,3), 5, replace = T)),2), q = runif(20), p = runif(20), b = runif(20), w = rep(c(rep(.1,2), rep(.2,2)),5)) ``` where `id`, is the subject id and `h` and `d` are hours and days. Each `id` has multiple `t`. For each type of `t` (which can be 1, 2, or 3), indexed by `t_i`, in each `h` and `d`, I need to compute the summation over the `t = t_i` of `q_i * pnorm((p_i - b_i)/ w_i)`. The output should ideally be an additional column to the `data.frame` `data`. What is the fastest way to compute this? My dataset is very large, and I'm afraid that a `for loop` or `sapply` will take forever. I was thinking of using the `aggregate` function but I am not sure it works with expressions other than `mean` or `sum`. ---- FIXED A TYPO IN THE DGP ---- Example: Given the data (`set.seed(1)`) The last column give the output while `Sum` gives the multiplication `q_i * pnorm((p_i - b_i)/ w_i)` by row. Rows 1 and 4 have same conditioning variables and thus have the same value in the `result` column because this would be equal to `q_1 * pnorm((p_1 - b_1)/ w_1) +`q\_5 \* pnorm((p\_5 - b\_5)/ w\_5)``. I'M SORRY IF I WAS NOT CLEAR. ``` id h d t q p b w Sum result 1.1 1 1 1 1 0.20597457 0.4820801 0.47761962 0.1 1.066513e-01 8.764928e-01 2.2 1 2 1 2 0.17655675 0.5995658 0.86120948 0.1 7.843788e-04 7.843788e-04 2.3 1 1 1 2 0.68702285 0.4935413 0.43809711 0.2 4.185307e-01 4.185307e-01 3.4 1 2 1 3 0.38410372 0.1862176 0.24479728 0.2 1.478031e-01 1.478031e-01 1.5 1 1 1 1 0.76984142 0.8273733 0.07067905 0.1 7.698414e-01 8.764928e-01 3.6 2 2 1 3 0.49769924 0.6684667 0.09946616 0.1 4.976992e-01 4.976992e-01 3.7 2 1 1 3 0.71761851 0.7942399 0.31627171 0.2 7.115705e-01 7.115705e-01 2.8 2 2 1 2 0.99190609 0.1079436 0.51863426 0.2 1.985233e-02 1.985233e-02 2.9 2 1 1 2 0.38003518 0.7237109 0.66200508 0.1 2.779585e-01 2.779585e-01 1.10 2 2 1 1 0.77744522 0.4112744 0.40683019 0.1 4.025021e-01 4.025021e-01 1.11 1 1 2 1 0.93470523 0.8209463 0.91287592 0.2 3.018017e-01 3.745763e-01 2.12 1 2 2 2 0.21214252 0.6470602 0.29360337 0.2 2.039559e-01 2.039559e-01 2.13 1 1 2 2 0.65167377 0.7829328 0.45906573 0.1 6.512825e-01 6.512825e-01 3.14 1 2 2 3 0.12555510 0.5530363 0.33239467 0.1 1.238378e-01 1.238378e-01 1.15 1 1 2 1 0.26722067 0.5297196 0.65087047 0.2 7.277459e-02 3.745763e-01 3.16 2 2 2 3 0.38611409 0.7893562 0.25801678 0.2 3.845907e-01 3.845907e-01 3.17 2 1 2 3 0.01339033 0.0233312 0.47854525 0.1 3.555325e-08 3.555325e-08 2.18 2 2 2 2 0.38238796 0.4772301 0.76631067 0.1 7.346728e-04 7.346728e-04 2.19 2 1 2 2 0.86969085 0.7323137 0.08424691 0.2 8.691717e-01 8.691717e-01 1.20 2 2 2 1 0.34034900 0.6927316 0.87532133 0.2 6.147884e-02 6.147884e-02 ```<issue_comment>username_1: Here's an R base approach using `ave`. 
I am not sure if this solves your problem, but here's an attempt for you to figure out how to write the expression: ``` data$Aggregated.Sum <- ave(data[, c("q", "p", "b", "w")], data[,c("id", "h", "d", "t")], FUN=function(x){ sum(x$q * pnorm((x$p - x$b)/ x$w)) })[, 1] ``` Giving the following output: ``` id h d t q p b w Aggregated.Sum 1 1 1 1 1 0.20597457 0.4820801 0.47761962 0.1 0.87649276750008 2 1 2 1 2 0.17655675 0.5995658 0.86120948 0.1 0.00078437884830 3 1 1 1 2 0.68702285 0.4935413 0.43809711 0.2 0.41853074006211 4 1 2 1 3 0.38410372 0.1862176 0.24479728 0.2 0.14780307785185 5 1 1 1 1 0.76984142 0.8273733 0.07067905 0.1 0.87649276750008 6 2 2 1 3 0.49769924 0.6684667 0.09946616 0.1 0.49769923892396 7 2 1 1 3 0.71761851 0.7942399 0.31627171 0.2 0.71157053468214 8 2 2 1 2 0.99190609 0.1079436 0.51863426 0.2 0.01985232872822 9 2 1 1 2 0.38003518 0.7237109 0.66200508 0.1 0.27795848822967 10 2 2 1 1 0.77744522 0.4112744 0.40683019 0.1 0.40250214883360 11 1 1 2 1 0.93470523 0.8209463 0.91287592 0.2 0.37457632092225 12 1 2 2 2 0.21214252 0.6470602 0.29360337 0.2 0.20395587133630 13 1 1 2 2 0.65167377 0.7829328 0.45906573 0.1 0.65128247418102 14 1 2 2 3 0.12555510 0.5530363 0.33239467 0.1 0.12383782495223 15 1 1 2 1 0.26722067 0.5297196 0.65087047 0.2 0.37457632092225 16 2 2 2 3 0.38611409 0.7893562 0.25801678 0.2 0.38459067414776 17 2 1 2 3 0.01339033 0.0233312 0.47854525 0.1 0.00000003555325 18 2 2 2 2 0.38238796 0.4772301 0.76631067 0.1 0.00073467275547 19 2 1 2 2 0.86969085 0.7323137 0.08424691 0.2 0.86917168501551 20 2 2 2 1 0.34034900 0.6927316 0.87532133 0.2 0.06147884477362 ``` Upvotes: 1 <issue_comment>username_2: I am not sure if this is what you expected ``` data <- split(data, data$id) data <- lapply(data, function(i) {split(i,i$t)}) data <- unlist(data,recursive=FALSE) data1 <- lapply(data, function(j) { res <- j[,c("q","p","b","w")] j$result <- apply(res,1,function(i) i["q"] *pnorm( (i["p"] - i["b"]))/ i["w"] ) j }) data1 <- do.call(rbind, data1) ``` Upvotes: 1 <issue_comment>username_3: We `group_by(t, id, h, d)` then calculate the `Sum` for each row then finally calculate the total sum `result` for rows that have the same `t, id, h, d` ```r library(tidyverse) set.seed(1) options(scipen = 999) dat <- data.frame(id = rep(c(rep(1,5), rep(2,5)),2), h = rep(1:2,5), d = c(rep(1,10), rep(2,10)), t = rep(c(sample(c(1,2,3), 5, replace = T), sample(c(1,2,3), 5, replace = T)),2), q = runif(20), p = runif(20), b = runif(20), w = rep(c(rep(.1,2), rep(.2,2)),5)) dat #> id h d t q p b w #> 1 1 1 1 1 0.20597457 0.4820801 0.47761962 0.1 #> 2 1 2 1 2 0.17655675 0.5995658 0.86120948 0.1 #> 3 1 1 1 2 0.68702285 0.4935413 0.43809711 0.2 #> 4 1 2 1 3 0.38410372 0.1862176 0.24479728 0.2 #> 5 1 1 1 1 0.76984142 0.8273733 0.07067905 0.1 #> 6 2 2 1 3 0.49769924 0.6684667 0.09946616 0.1 #> 7 2 1 1 3 0.71761851 0.7942399 0.31627171 0.2 #> 8 2 2 1 2 0.99190609 0.1079436 0.51863426 0.2 #> 9 2 1 1 2 0.38003518 0.7237109 0.66200508 0.1 #> 10 2 2 1 1 0.77744522 0.4112744 0.40683019 0.1 #> 11 1 1 2 1 0.93470523 0.8209463 0.91287592 0.2 #> 12 1 2 2 2 0.21214252 0.6470602 0.29360337 0.2 #> 13 1 1 2 2 0.65167377 0.7829328 0.45906573 0.1 #> 14 1 2 2 3 0.12555510 0.5530363 0.33239467 0.1 #> 15 1 1 2 1 0.26722067 0.5297196 0.65087047 0.2 #> 16 2 2 2 3 0.38611409 0.7893562 0.25801678 0.2 #> 17 2 1 2 3 0.01339033 0.0233312 0.47854525 0.1 #> 18 2 2 2 2 0.38238796 0.4772301 0.76631067 0.1 #> 19 2 1 2 2 0.86969085 0.7323137 0.08424691 0.2 #> 20 2 2 2 1 0.34034900 0.6927316 0.87532133 0.2 dat %>% group_by(t, id, h, d) 
%>% mutate(Sum = q * pnorm((p - b)/w)) %>% mutate(result = sum(Sum)) #> # A tibble: 20 x 10 #> # Groups: t, id, h, d [18] #> id h d t q p b w Sum result #> #> 1 1. 1 1. 1. 0.206 0.482 0.478 0.100 0.107 8.76e-1 #> 2 1. 2 1. 2. 0.177 0.600 0.861 0.100 0.000784 7.84e-4 #> 3 1. 1 1. 2. 0.687 0.494 0.438 0.200 0.419 4.19e-1 #> 4 1. 2 1. 3. 0.384 0.186 0.245 0.200 0.148 1.48e-1 #> 5 1. 1 1. 1. 0.770 0.827 0.0707 0.100 0.770 8.76e-1 #> 6 2. 2 1. 3. 0.498 0.668 0.0995 0.100 0.498 4.98e-1 #> 7 2. 1 1. 3. 0.718 0.794 0.316 0.200 0.712 7.12e-1 #> 8 2. 2 1. 2. 0.992 0.108 0.519 0.200 0.0199 1.99e-2 #> 9 2. 1 1. 2. 0.380 0.724 0.662 0.100 0.278 2.78e-1 #> 10 2. 2 1. 1. 0.777 0.411 0.407 0.100 0.403 4.03e-1 #> 11 1. 1 2. 1. 0.935 0.821 0.913 0.200 0.302 3.75e-1 #> 12 1. 2 2. 2. 0.212 0.647 0.294 0.200 0.204 2.04e-1 #> 13 1. 1 2. 2. 0.652 0.783 0.459 0.100 0.651 6.51e-1 #> 14 1. 2 2. 3. 0.126 0.553 0.332 0.100 0.124 1.24e-1 #> 15 1. 1 2. 1. 0.267 0.530 0.651 0.200 0.0728 3.75e-1 #> 16 2. 2 2. 3. 0.386 0.789 0.258 0.200 0.385 3.85e-1 #> 17 2. 1 2. 3. 0.0134 0.0233 0.479 0.100 0.0000000356 3.56e-8 #> 18 2. 2 2. 2. 0.382 0.477 0.766 0.100 0.000735 7.35e-4 #> 19 2. 1 2. 2. 0.870 0.732 0.0842 0.200 0.869 8.69e-1 #> 20 2. 2 2. 1. 0.340 0.693 0.875 0.200 0.0615 6.15e-2 ``` Created on 2018-03-16 by the [reprex package](http://reprex.tidyverse.org) (v0.2.0). Upvotes: 3 [selected_answer]
2018/03/16
1,188
4,022
<issue_start>username_0: I am fairly new to power shell scripting. I am trying to write a script that will update the BIOS of several hundred Dell computers. But I am having an issue with these two errors: > > > ``` > Get-WmiObject : Invalid namespace "root\DCIM\SYSMAN" > At C:\Users\xxx\Desktop\Bios Updates Test.ps1:4 char:17 > + ... SVersion = (Get-WmiObject -Namespace root\DCIM\SYSMAN -Class DCIM_BIO ... > + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > + CategoryInfo : InvalidArgument: (:) [Get-WmiObject], ManagementException > + FullyQualifiedErrorId : GetWMIManagementException,Microsoft.PowerShell.Commands.GetWmiObjectCommand > > ``` > > and > > > ``` > You cannot call a method on a null-valued expression. > At C:\Users\xxx\Desktop\Bios Updates Test.ps1:12 char:5 > + if($BIOSVersion.CompareTo($BIOSUpdateFileVersion) -eq 0) > + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > + CategoryInfo : InvalidOperation: (:) [], RuntimeException > + FullyQualifiedErrorId : InvokeMethodOnNull > > ``` > > My code is: ``` $ScriptFolder = Split-Path -Parent $MyInvocation.MyCommand.Definition $Model = $((Get-WmiObject -Class Win32_ComputerSystem).Model).Trim() $BIOSVersion = (Get-WmiObject -Namespace root\DCIM\SYSMAN -Class DCIM_BIOSElement).Version if(Test-Path -Path $ScriptFolder\$Model) { $BIOSUpdateFile = Get-ChildItem -Path $ScriptFolder\$Model $BIOSUpdateFileVersion = $BIOSUpdateFile.ToString() -replace ($BIOSUpdateFile.Extension,"") $BIOSUpdateFileVersion = $BIOSUpdateFileVersion.Substring($BIOSUpdateFileVersion.Length -5) if($BIOSVersion.CompareTo($BIOSUpdateFileVersion) -eq 0) { Write-Output "BIOS Version is up to date" } else { try{ Write-Output "BIOS Update Needed. Attempting BIOS Flash Operation..." #Invoke-Expression $ScriptFolder\$Model\$BIOSUpdateFile " /quiet" $objStartInfo = New-Object System.Diagnostics.ProcessStartInfo $objStartInfo.FileName = "$ScriptFolder\$Model\$BIOSUpdateFile" $objStartInfo.Arguments = "-noreboot -nopause -forceit" $objStartInfo.CreateNoWindow = $true [System.Diagnostics.Process]::Start($objStartInfo) | Out-Null } catch {[Exception] Write-Output "Failed: $_" } } Write-Output "End Dell BIOS Update Operation. Completed" } else { Write-Output "Model Not Supported" } ```<issue_comment>username_1: > > Why am I getting an error on null-valued expression? > > > The error message says `You cannot call a method on a null-valued expression.` The method on the null-valued expression is `$BIOSVersion.CompareTo($BIOSUpdateFileVersion)`. Thas is, because `$BIOSVersion` is an unassigned variable (`null`). Why is that? If you try to assign the variable, `Get-WmiObject` throws an error, because it couldn't find the namespace. So, in order to fix the second error, you have to assign the variable properly, hence find a proper namespace and/or WMI class. Upvotes: 0 <issue_comment>username_2: This Dell site describes how to test to see that OMCI is installed. <http://en.community.dell.com/techcenter/systems-management/w/wiki/4573.omci-from-command-line> ``` PS C:\src\t> wmic /namespace:\\root\dcim\sysman path __namespace Node - CC0xxxxxx ERROR: Description = Invalid namespace ``` Are you required to dig into the DELL OMCI namespace? Would the following be of any use? ``` PS C:\src\t> Get-WmiObject -Namespace 'root\CIMV2' -Class CIM_BIOSElement SMBIOSBIOSVersion : 68ICE Ver. 
F.42 Manufacturer : Hewlett-Packard Name : Default System BIOS SerialNumber : 5CB3310PDF Version : HPQOEM - f ``` And, given that CIM is the future: ``` PS C:\src\t> Get-CimInstance -ClassName CIM_BIOSElement SMBIOSBIOSVersion : 68ICE Ver. F.42 Manufacturer : Hewlett-Packard Name : Default System BIOS SerialNumber : 5CB3310PDF Version : HPQOEM - f ``` Upvotes: 1
2018/03/16
646
2,165
<issue_start>username_0: From a CSS Working Group [post](https://wiki.csswg.org/ideas/dollar-variables?s[]=variables): > > Custom properties are used for more than just variables [...] > > > Contradicting what a lot of people think (see [this](https://stackoverflow.com/a/48887899/3537581)), this post confirms not only that variables and custom properties are not the same thing, but that custom properties may be used for other things than variables. My question is about the last part. Has anyone ever come across some CSS code where custom properties were used for something other than variables?
2018/03/16
469
1,433
<issue_start>username_0: I want to use `regex` to test if a string is either a `-` followed by some intentional note, or is completely blank without whitespace of any kind. I tested it on [regex101.com](http://www.regex101.com) but I don't understand how the `$` symbol doesn't singlehandedly prevent the `\n` character from being a match. How can I adjust my statement to match my expectations? Thank you in advance.

```
match(r"^(-.*|)$", "\n") is not None #returns True
match(r"(^-.*$|^$)", "\n") is not None #returns True
```

<issue_comment>username_1: The problem is your use of the `$` anchor. From [the docs](https://docs.python.org/3/library/re.html#regular-expression-syntax):

> '$'
> Matches the end of the string **or just before the newline at the end of the string**, and in MULTILINE mode also matches before a newline.

You have to use `\Z` instead, which matches only at the end of the string:

```
>>> re.match(r'^(-.*)?\Z', '\n') is None
True
```

Or, alternatively, you could drop the anchors and use [`re.fullmatch`](https://docs.python.org/3/library/re.html#re.fullmatch):

```
>>> re.fullmatch(r'(-.*)?', '\n') is None
True
```

Upvotes: 4 [selected_answer]<issue_comment>username_2: You can try matching a non-space character after `-`.

The regex `^\-\S+` will match as follows:

```
-a <-- match
- <-- no match
-9ddd <-- match
```

See <https://regex101.com/r/56iMem/1>

Upvotes: 0
2018/03/16
447
1,476
<issue_start>username_0: Looking for a quick answer. I've moved my scss and fonts folders from src/ to src/assets and I'm getting the below error: [![enter image description here](https://i.stack.imgur.com/UKu1e.png)](https://i.stack.imgur.com/UKu1e.png) The code in this folder is as below: ``` src: url('~assets/fonts/icomoon.eot?q72v19'); ``` I was like what... that is right! I've been scratching my head and though find I'll put my fonts folder back out of the assets folder so src/fonts. And changed the code above to: ``` src: url('~fonts/icomoon.eot?q72v19'); ``` Now I get the exact same message but for this path. So my original folder structure before I moved it into assets now doesn't work. And I'm stuck with a error. I should point out that this is using create-react-app and I have ejected so I had more control over the webpack configs. This is my folder structure I'm currently trying to make work: [![enter image description here](https://i.stack.imgur.com/zfKCe.png)](https://i.stack.imgur.com/zfKCe.png) I'm hoping this will be an easy fix as I'm wasting time on this..<issue_comment>username_1: Quick answer, delete everything and start from a fresh clone. Upvotes: -1 <issue_comment>username_2: **How I Solved**: Stopped the server then restarted and edited paths with errors. In terminal 1. **Press** `Ctrl+C` , 2. then `Y` 3. then `npm start` 4. then `Edit Paths` (pointing to new directory of moved files/folder) Upvotes: 0
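Since neither answer above pins down the path resolution itself: in an ejected create-react-app config, one targeted option is a webpack alias so that `~assets/...` in the .scss resolves to the moved folder. A sketch under assumptions (the alias name and the relative path depend on where the generated config file sits in this particular layout):

```js
// config/webpack.config.js (ejected CRA) -- hypothetical snippet
const path = require('path');

module.exports = {
  // ...rest of the generated config
  resolve: {
    alias: {
      // lets sass-loader resolve "~assets/fonts/icomoon.eot" to src/assets/fonts/...
      assets: path.resolve(__dirname, '../src/assets'),
    },
  },
};
```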
2018/03/16
1,052
3,392
<issue_start>username_0: How to round a given number to a 0, 5 or 9, which ever is closest? I've tried with: ``` =MROUND(I2,5)-((MOD(MROUND(I2,5),10))=0) ``` But I need zeros too, this only gives me fives and nines. Thanks<issue_comment>username_1: ``` =1*(ROUNDDOWN(A2/10,0)&INDEX({0,5,9},MATCH(MIN(ABS({0,5,9}-MOD(A2,10))),ABS({0,5,9}-MOD(A2,10)),0))) ``` * MOD to get right digit of cell * Find the distance between {0,5,9} and the right digit and MATCH it to the closest digit * Concatenate right of the matched digit to the left digit and multiply by 1 --- Added a 0 to `ROUNDDOWN` to satisfy [excel](/questions/tagged/excel "show questions tagged 'excel'"), producing the following: ![here](https://i.stack.imgur.com/hWILP.png) ![plot](https://i.stack.imgur.com/EpEzc.jpg) Upvotes: 3 [selected_answer]<issue_comment>username_2: From what I understand of your obscure request, you need to take last whole digit and any fraction (so with `123.456` we'll be looking at `3.456`) and compare it to this chart: Interpretation: --------------- ***`If it is: Then it becomes:`*** `>= 0 And <2.5 0` `>= 2.5 And < 7 5` `>= 7 And < 9.5 9` `>= 9.5 and < 10 10 (zero rounded up)` Code: ----- ``` Function round059(num As Double) Dim L As Single, R As Single L = Int(num / 10) * 10 R = num - L If R >= 2.5 And R < 7 Then R = 5 Else If R >= 7 And R < 9.5 Then R = 9 Else If R >= 9.5 Then R = 10 Else R = 0 End If End If End If round059 = L + R End Function ``` Test: ----- ``` Sub test() Dim x As Double For x = -0.1 To 10.1 Step 0.2 Debug.Print " " & x & ":", , round059(x) Next x End Sub ``` Result: ------- ``` -0.1: 0 0.1: 0 0.3: 0 0.5: 0 0.7: 0 0.9: 0 1.1: 0 1.3: 0 1.5: 0 1.7: 0 1.9: 0 2.1: 0 2.3: 0 2.5: 5 2.7: 5 2.9: 5 3.1: 5 3.3: 5 3.5: 5 3.7: 5 3.9: 5 4.1: 5 4.3: 5 4.5: 5 4.7: 5 4.9: 5 5.1: 5 5.3: 5 5.5: 5 5.7: 5 5.9: 5 6.1: 5 6.3: 5 6.5: 5 6.7: 5 6.9: 5 7.1: 9 7.3: 9 7.5: 9 7.7: 9 7.9: 9 8.10000000000001: 9 8.3: 9 8.5: 9 8.7: 9 8.9: 9 9.1: 9 9.3: 9 9.5: 10 9.7: 10 9.9: 10 10.1: 10 ``` Ya? --- Upvotes: 2
2018/03/16
1,661
5,537
<issue_start>username_0: Okey so I have created some booleans and according to them I would like to set color of a button. So lets say I have

```
boolean test1 = true;
boolean test2 = false;
```

True would be green and false would be red. And now I would like my program to set the colors to buttons according to booleans.

```
package com.example.kamil.tmpsadmin;

import android.content.Intent;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;
import android.widget.TextView;

public class MainActivity extends AppCompatActivity {

    String button1nazwa = "Kamil", button1register = "SJZ-RG78",
            button2nazwa = "Daniel", button2register = "SJZ-7782",
            button3nazwa = "Kajetan", button3register = "SJZ-6669",
            button4nazwa = "Szymon", button4register = "SJZ-GRA3",
            button5nazwa = "Bartek", button5register = "SJZ-MET2",
            button6nazwa = "Paweeł", button6register = "SJZ-KOZAK";

    Boolean button1dostepnosc = true, button2dostepnosc = true,
            button3dostepnosc = false, button4dostepnosc = false,
            button5dostepnosc = false, button6dostepnosc = true;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        TextView textView = (TextView) findViewById(R.id.textView);
        textView.setText(button1nazwa + "\n" + button1register + "\n");
        TextView textView2 = (TextView) findViewById(R.id.textView2);
        textView2.setText(button2nazwa + "\n" + button2register + "\n");
        TextView textView3 = (TextView) findViewById(R.id.textView3);
        textView3.setText(button3nazwa + "\n" + button3register + "\n");
        TextView textView4 = (TextView) findViewById(R.id.textView4);
        textView4.setText(button4nazwa + "\n" + button4register + "\n");
        TextView textView5 = (TextView) findViewById(R.id.textView5);
        textView5.setText(button5nazwa + "\n" + button5register + "\n");
        TextView textView6 = (TextView) findViewById(R.id.textView6);
        textView6.setText(button6nazwa + "\n" + button6register + "\n");

        Button button = (Button) findViewById(R.id.button);
        button.setOnClickListener(new View.OnClickListener(){
            public void onClick(View v){
                startActivity(new Intent(MainActivity.this, Zmiana.class));
            }
        });
    }
```
2018/03/16
730
2,675
<issue_start>username_0: I am trying to make a button for my website, which has a changing color hover effect. It can simply be done in css in this way: ``` .button{ color: green; } .button:hover { background : green; color: black; } ``` However, I wanted to give this a nice transition using JQuery (since I'm trying to learn it). Here's one of the codes I've tried so far: ``` $(function() { $('.button').on('mouseover', function() { $(this).css({ "background" : "green", "color" : "black", }) }) }) ``` I've tried it in so many other ways, like using the "hover" function instead of "on(mouseover)", using the addClass method, and so on. But in all of those, nothing happens when I hover the mouse on the button. Is there any easy way to do this?<issue_comment>username_1: You can use [CSS transitions](https://www.w3schools.com/css/css3_transitions.asp) to smooth out the effect. There are jQuery libraries to handle this, but CSS is the better way. You'll need to [check for feature support](https://developer.mozilla.org/en-US/docs/Web/CSS/@supports) and use [jQuery transitions](https://api.jquery.com/category/effects/) only for browsers that need it. EDIT: @<NAME> since you insist on doing this with jQuery instead of native CSS methods, [you can use `.animate()`](https://api.jquery.com/animate/) with the [jQuery.Color plugin](https://github.com/jquery/jquery-color). But for browsers that support it, you are better off with CSS transitions. From the `.animate()` documentation: > > All animated properties should be animated to a single numeric value, except as noted below; most properties that are non-numeric cannot be animated using basic jQuery functionality (For example, width, height, or left can be animated but background-color cannot be, unless the jQuery.Color plugin is used). > > > Example usage from the jQuery.Color documentation: ``` jQuery("#go").click(function () { jQuery("#block").animate({ backgroundColor: "#abcdef" }, 1500); }); jQuery("#sat").click(function () { jQuery("#block").animate({ backgroundColor: jQuery.Color({ saturation: 0 }) }, 1500); }); ``` Upvotes: 0 <issue_comment>username_2: You can have nice transitions in css! A simple implementation with `transition` is below. More complicated effects can be achieved using `@keyframes` ```css .button { display: inline-block; padding: 10px 40px; background-color: #0f82e0; color: #ffffff; transition: background-color 0.3s; } .button:hover { background-color: #27b258; } ``` ```html I'm a button! ``` Upvotes: 3 [selected_answer]
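Tying the two answers back to the original goal: if the jQuery route is really wanted for practice, `.hover()` plus `.animate()` gives the transition, assuming the jQuery.Color plugin is loaded (plain jQuery cannot tween colors, as noted above). The resting colors below are assumptions based on the original CSS:

```js
$('.button').hover(
  function () { // mouse enters: fade to the hover colors
    $(this).animate({ backgroundColor: 'green', color: 'black' }, 300);
  },
  function () { // mouse leaves: fade back to the assumed resting colors
    $(this).animate({ backgroundColor: 'white', color: 'green' }, 300);
  }
);
```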
2018/03/16
1,198
2,523
<issue_start>username_0: This is my data set ``` fake_abalone2 Sex Length Diameter Height Whole Shucked Viscera Shell Rings Weight Weight Weight Weight 0 M 0.455 0.365 0.095 0.5140 0.2245 0.1010 0.1500 15 1 M 0.350 0.265 0.090 0.2255 0.0995 0.0485 0.0700 7 2 F 0.530 0.420 0.135 0.6770 0.2565 0.1415 0.2100 9 3 M 0.440 0.365 0.125 0.5160 0.2155 0.1140 0.1550 10 4 K 0.330 0.255 0.080 0.2050 0.0895 0.0395 0.0550 7 5 K 0.425 0.300 0.095 0.3515 0.1410 0.0775 0.1200 8 ``` Getting syntax error while using the following method. Please help me out. I want the value in "sex" table to change depending on "Rings" table.If "Rings" value is less than 10 the corresponding "sex" value should be changed to 'K'.Otherwise, no change should be made in "Sex" table. ``` fake_abalone2["sex"]=fake_abalone2["Rings"].apply(lambda x:"K" if x<10) ``` > > File "", line 1 > fake\_abalone2["sex"]=fake\_abalone2["Rings"].apply(lambda x:"K" if x<10) > > > SyntaxError: invalid syntax > > ><issue_comment>username_1: You can use Python numpy instead of lambda function. Import python numpy using `import numpy as np` then you can use the following method to replace the string. ``` fake_abalone2['Sex'] = np.where(fake_abalone2['Rings']<10, 'K', fake_abalone2['Sex']) ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: The Following method works perfectly. ``` df1["Sex"]=df1.apply(lambda x: "K"if x.Rings<10 else x["Sex"],axis=1) ``` df1 is the dataframe ``` Sex Length Diameter Height Whole Shucked Viscera Shell Rings weight weight weight weight 0 M 0.455 0.365 0.095 0.5140 0.2245 0.1010 0.1500 15 1 K 0.350 0.265 0.090 0.2255 0.0995 0.0485 0.0700 7 2 K 0.530 0.420 0.135 0.6770 0.2565 0.1415 0.2100 9 3 M 0.440 0.365 0.125 0.5160 0.2155 0.1140 0.1550 10 4 K 0.330 0.255 0.080 0.2050 0.0895 0.0395 0.0550 7 5 K 0.425 0.300 0.095 0.3515 0.1410 0.0775 0.1200 8 6 F 0.530 0.415 0.150 0.7775 0.2370 0.1415 0.3300 20 ``` Upvotes: 2 <issue_comment>username_3: The main problem is the output of the lambda function: ``` .apply(lambda x:"K" if x<10) ``` The output is not certain for other conditions, so you can use **else** something ... ``` .apply(lambda x:"K" if x<10 else None) ``` Upvotes: 0
2018/03/16
797
1,967
<issue_start>username_0: I have an element in local storage with multiple elements, for simplicity, I will make the element:

```
*
```

The element is saved as a string and I want to manipulate the contents. Like such:

```
let local_storage_element = localStorage.getItem("val")
$(local_storage_element+':last-child').append("something ")
```

No matter what selector I add after local_storage_element it will always append the value to the string, not to the selected element (:last-child in this case). Does anyone know how to append to a specific element within the string??
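For what it's worth, the usual pattern here is to parse the stored string into real elements, manipulate those, and then serialize back; a minimal sketch, where the `<ul>` markup is an assumption since the original example markup was lost in formatting:

```js
var stored = localStorage.getItem('val');        // e.g. '<ul><li>a</li><li>b</li></ul>'
var $fragment = $(stored);                       // parse the string into a jQuery object
$fragment.children(':last-child').append('something '); // selectors work on elements, not strings
localStorage.setItem('val', $fragment.prop('outerHTML')); // serialize back for storage
```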
2018/03/16
745
2,843
<issue_start>username_0: I need to persist state in Redux after refresh of my website. What is the easiest way to do this? I rather wouldn't like to use redux-persist if it possible.<issue_comment>username_1: Basically, you need two functions, **loadState()** and **saveState()**. ``` export const loadState = () => { try { const serializedState = localStorage.getItem("state"); if (!serializedState) return undefined; else return JSON.parse(serializedState); } catch(err) { return undefined; } }; export const saveState = (state) => { try { const serializedState = JSON.stringify(state); localStorage.setItem("state", serializedState); } catch(err) { console.log(err); } }; ``` You need the try/catch due some privacy settings on browsers. Then, you have to call `loadState` when you are initializing your store and call `saveState` on `store.subscribe()` to save the state to localStorage on every state change. Like this: ``` const persistedStore = loadState(); const store = createStore( // ... your reducers persistedStore ); store.subscribe(() => { saveState(store.getState()); }); ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: You can go to this tutorial [Persisting Redux State](https://egghead.io/lessons/javascript-redux-persisting-the-state-to-the-local-storage). It is made by <NAME> ( The man who made redux ). Just follow the steps and it will work for you. For Quick answer : 1. Add a localstorage.js file in you project ( if you have separate redux folder, make this file there, or else just make it in src folder ) 2. Paste this code in the file ``` export const loadState = () => { try { const serializedState = localStorage.getItem("state"); if (serializedState === null) { return undefined; } return JSON.parse(serializedState); } catch (err) { return undefined; } }; export const saveState = (state) => { try { const serializesState = JSON.stringify(state); localStorage.setItem("state", serializesState); } catch (err) { console.log(err); } }; ``` 3. Here I have stored the state in the local storage, you can also store it in cookie to make it inaccessible by the user. 4. Then if you are creating store in index.js file, call loadState and pass it as initial state in the createStore function. I have made separate store file so my code is a little different ``` const persistedState = loadState(); const Middlewares = [thunk]; const composeEnhancers = window.__REDUX_DEVTOOLS_EXTENSION_COMPOSE__ || compose; const store = createStore( rootReducer, persistedState, composeEnhancers(applyMiddleware(...Middlewares)) ); store.subscribe(() => { saveState(store.getState()); }); ``` 5. And that's all. You are done. Go to the browser and reload as many times you can. Upvotes: 1
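One refinement worth noting, assuming lodash is available: throttle the subscriber so a large state tree isn't serialized on every single action:

```js
import throttle from 'lodash/throttle';

// write to localStorage at most once per second
store.subscribe(throttle(() => {
  saveState(store.getState());
}, 1000));
```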
2018/03/16
213
693
<issue_start>username_0: I have the following line of code ``` import javax.xml.bind.DatatypeConverter; ``` which is giving me a ``` javax.xml.bind cannot be resolved ``` where can I find the library file for this import?<issue_comment>username_1: The module **javax.xml.bind** has been renamed and is also deprecated as of **Java 9**. To workaround you can add `--add-modules java.xml.bind` to `javac` compilation. As suggested [here](https://stackoverflow.com/a/43574427/9428319) Or switch back to **Java 8** or older Upvotes: 4 [selected_answer]<issue_comment>username_2: If you have a maven build, add the following to your pom.xml ``` javax.xml.bind jaxb-api ``` Upvotes: 2
2018/03/16
608
2,145
<issue_start>username_0: I have different timestamps and a timezones coming from an API as an object. Example: ``` {{'ts' : 1521311400000}, {'tz' : 'GMT+01:00'}} ``` How can I convert the timestamp to a human readable Date in that timezone using `toLocaleTimeString()`? I have tried to pass the timezone inside the `options` object as the value for `timeZone` as stated [here](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/toLocaleTimeString) but I get an `invalid time zone in DateTimeFormat()` Error and cant figure out how the correct Format should be.<issue_comment>username_1: To get that kind of flexibility in formatting dates, you'll probably want to look into [Moment](https://momentjs.com/) and [Moment Timezone](https://momentjs.com/timezone/). Upvotes: 2 <issue_comment>username_1: If you'd like another option, you could try this: ``` function init() { function formatFixedTimeZone(ts, tz) { let offsetStr = tz.replace(/:/g, ''); let reverseOffset = offsetStr.replace(/[-+]/, sign => sign === '+' ? '-' : '+'); let time = new Date(ts); let timeStr = time.toUTCString().replace('GMT', reverseOffset); time = new Date(Date.parse(timeStr)); timeStr = time.toLocaleString('en-US', { timeZone: 'UTC', // Don't change this from UTC for other time zones. weekday: 'short', month: 'short', day: 'numeric', year: 'numeric', hour: '2-digit', minute: '2-digit', second: '2-digit' }) + ' ' + tz; return timeStr; } let timeDisplay = document.getElementById('time'); function tick() { let nowStr = formatFixedTimeZone(Date.now(), 'GMT+01:00'); timeDisplay.textContent = nowStr; } setInterval(tick, 1000); } ``` Plunker here: <http://plnkr.co/edit/Gk6SOLwpWqfoT5gHlCrb?p=preview> This displays a running count of the current time in a fixed time zone, but I've tried to write it in such a way that if you aren't using the current time, but the ts and tz values from your API, it shouldn't be too hard to adapt this to your needs and to different output formatting. Upvotes: 2 [selected_answer]
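Another angle that avoids reformatting tricks, assuming the offsets are always whole hours: the `timeZone` option accepts IANA `Etc/GMT` zone names, whose signs are inverted relative to the familiar notation, so `'GMT+01:00'` maps to `'Etc/GMT-1'`. The mapping function below is a hypothetical helper, not part of any API:

```js
// invert the sign: 'GMT+01:00' -> 'Etc/GMT-1', 'GMT-05:00' -> 'Etc/GMT+5'
function toIanaFixedZone(tz) {
  var m = tz.match(/^GMT([+-])(\d{2}):00$/);
  if (!m) throw new Error('unsupported offset: ' + tz); // only whole hours handled
  return 'Etc/GMT' + (m[1] === '+' ? '-' : '+') + parseInt(m[2], 10);
}

new Date(1521311400000).toLocaleTimeString('en-US', {
  timeZone: toIanaFixedZone('GMT+01:00')
});
```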
2018/03/16
1,165
4,812
<issue_start>username_0: I have an architectural problem. I'm using Django (with admin panel) and DRF (api with stateless authentication using JWT). Django has Admin users represented by model which is more or less the same as default Django User Model. Admins work with Django Admin only and can't use DRF api. DRF has API users that are allowed to use only api through DRF and can't interact with Django Admin or Django Session etc. I know that the best approach is to use multi model inheritance, like: ``` class User(DjangoUserModel): pass class Admin(User): pass class API(User): pass AUTH_USER_MODEL = "User" ``` but the problem is that, those users are completly different. Eg: API user has complex composite key as an username field which is impossible in combination to simple `Admin` username field. And many other differences. .. The question is: may I use a user object that is not an `AUTH_USER_MODEL` instance in DRF? So `self.request.user` will store a model instance that isn't connect in any way with an `AUTH_USER_MODEL`. Has any of you done something similar?<issue_comment>username_1: One thing that I immediately recall is how Mongoengine used to hack the whole django authentication system. Mongoengine < 0.10 has a [django compatibility app](https://github.com/MongoEngine/mongoengine/tree/0.9/mongoengine/django) that implements a crutch to store users in MongoDB and make them accessible via `self.request.user`. It has to use a crutch, because Django Session API is opinionated and assumes that you're using AUTH\_USER\_MODEL, backed by SQL database for storing your users. So, I think you should disable `SessionMiddleware` and CSRF token handling and use 2 distinct [custom authentication systems](https://docs.djangoproject.com/en/dev/topics/auth/customizing/) for Admin and API purposes. With TokenAuthentication or JWTAuthentication this should be doable. [Here's another example of a project](https://github.com/BurkovBA/django-rest-framework-mongoengine-example) with DRF and Mongoengine, with a [custom implementation of TokenAuthentication](https://github.com/BurkovBA/django-rest-framework-mongoengine-example/tree/master/project/users), backed by MongoDB. Upvotes: 2 <issue_comment>username_2: Since you are using Django and DRF, perhaps you can write an `APIUser` model which extends from [`AbstractBaseUser`](https://docs.djangoproject.com/en/2.0/topics/auth/customizing/#django.contrib.auth.models.AbstractBaseUser) with your customizations, write a [custom authentication class](http://www.django-rest-framework.org/api-guide/authentication/#example) and plug that into the [REST\_FRAMEWORK.DEFAULT\_AUTHENTICATION\_CLASSES](http://www.django-rest-framework.org/api-guide/settings/#default_authentication_classes) setting. Leave AUTH\_USER\_MODEL alone for the django admin. Your custom authentication may just need to override [`authenticate_credentials`](https://github.com/encode/django-rest-framework/blob/master/rest_framework/authentication.py#L192) (i've referenced the TokenAuthentication class in DRF github) and return an APIUser rather than the default defined in `settings.AUTH_USER_MODEL`. This will be a bit different because you're decoding a JWT, so you'll likely be extract some info from your JWT and looking up your APIUser by whatever means you need, such as your composite field. This should result in `self.request.user` being an APIUser for your DRF API views. 
Your API views should be using the REST framework's settings, and your Django admin should be using the regular Django settings. There may be some other caveats, but generally you'll be ok with just this I think.

Upvotes: 2 <issue_comment>username_3: Well, yes sir. You can do that. Look at the following [example](https://github.com/GetBlimp/django-rest-framework-jwt/issues/22):

```
from rest_framework_jwt.authentication import JSONWebTokenAuthentication

class AuthenticatedServiceClient:
    def is_authenticated(self):
        return True

class JwtServiceOnlyAuthentication(JSONWebTokenAuthentication):
    def authenticate_credentials(self, payload):
        # Assign properties from payload to the AuthenticatedServiceClient object if necessary
        return AuthenticatedServiceClient()
```

And in settings.py:

```
REST_FRAMEWORK = {
    'UNAUTHENTICATED_USER': None,
    'DEFAULT_AUTHENTICATION_CLASSES': (
        'myapp.authentication.JwtServiceOnlyAuthentication',
    ),
}
```

If you want, you can define additional `DEFAULT_AUTHENTICATION_CLASSES` for your DRF. The authentication classes are just like middleware, just a queue that is populating `request.user`. Adding your own authentication classes that use a different user model from `AUTH_USER_MODEL` will work exactly as you would expect.

Upvotes: 4 [selected_answer]
2018/03/16
514
1,781
<issue_start>username_0: I tried "T extends Array" but that doesnt work, ``` public class Foo { { ...} } public class Bar { { Foo f; <-- Error } } ``` From what I saw, I dont think its posible, but hey, I'm not a Java guru<issue_comment>username_1: No, you can't extend a class with `Array`. You can only extend Java types like classes, interfaces etc. (You can also not extend `int` since it is a primitive type). If you want a new type of array you have to create an array of that object e.g. `int[] intArray = new int[5];` or `Foo[] foo = new Foo[5];` [read here for why it is not allowed](https://stackoverflow.com/questions/24913612/why-java-does-not-allow-to-extend-array-type) Upvotes: 1 [selected_answer]<issue_comment>username_2: If you want to restrict the type variable `T` to array-of-references types, you can instead have `T` be the component type, and restrict the component type, and use `T[]` where you need the array type, i.e. instead of ``` public class Foo { T obj; } Foo f; ``` you would have ``` public class Foo { T[] obj; } Foo f; ``` If you want to allow `T` to be array-of-primitives types too, that's not possible without allowing non-array reference types too, because the only supertype of array-of-primitive types like `int[]`, `boolean[]`, etc. is `Object` (and perhaps also `Cloneable` and `Serializable`). There is no supertype of "just" primitive and reference arrays. The most restrictive you can do would be something like ``` public class Foo ``` and if you did that, you would not be able to do anything with `T` except the things that you can do to any `Object` anyway (note that neither the `Cloneable` nor `Serializable` interfaces provide any methods), so then you might as well not restrict `T` at all. Upvotes: 2
2018/03/16
695
2,509
<issue_start>username_0: Let's clarify three common scenarios when no item matches the request with a simple example:

1. `GET /posts/{postId}` and `postId` does not exist (status code 404, no question)
2. `GET /posts?userId={userId}` and the user with `userId` does not have any posts
3. `GET /posts?userId={userId}` and the user with `userId` does not exist itself

I know there's no strict REST guideline for the appropriate status codes for cases 2 and 3, but it seems to be a common practice to return `200` for case 2 as it's considered a "search" request on the `posts` resource, so it is said that `404` might not be the best choice. Now I wonder if there's a common practice to handle case 3. Based on a similar reasoning with case 2, `200` seems to be more relevant (and of course in the response body more info could be provided), although returning `404` to highlight the fact that `userId` itself does not exist is also tempting. Any thoughts?
2018/03/16
463
1,988
<issue_start>username_0: I was making this simple python game where you guess a random number. I was wondering how you would loop it so it would keep going until you get the correct number. It needs to do that without changing the number while the player is guessing.

```
import random

print('Welcome to Guess The Number V1.0')
print('the rules are simple guess a number 1-6 and see if youre correct')
number = random.randint(0,7)
#------------------------------------------------------
guess = int(input('Enter a number:'))
print(guess)
#------------------------------------------------------
if guess > number:
    print('Your guess was too high')
if guess < number:
    print('Your guess was too low')
#-------------------------------------------------------
if guess == number:
    print('Correct!')
```

<issue_comment>username_1: You would use a `while` loop to repeat the guessing procedure like so:

```
import random

print('Welcome to Guess The Number V1.0')
print('the rules are simple guess a number 1-6 and see if youre correct')
number = random.randint(0,7)
#------------------------------------------------------
guess = int(input('Enter a number:'))

while guess != number: #Following code only runs if the guess isn't correct
    if guess > number:
        print('Your guess was too high')
    if guess < number:
        print('Your guess was too low')
    guess = int(input('Enter a number:')) #Need this here to ask for another guess
                                          #and avoid an infinite loop

print('Correct!')
```

Hope that helps!

Upvotes: 0 <issue_comment>username_2: If you used a while loop you could make it run without needing to stop it and reload the site. It would look like this:

```
while guess != number:
    if guess > number:
        print('Your guess was too high')
    if guess < number:
        print('Your guess was too low')
    guess = int(input('Enter a number:')) # re-prompt inside the loop, otherwise it never ends
#-------------------------------------------------------
if guess == number:
    print('Correct!')
```

Upvotes: 1
2018/03/16
539
2,099
<issue_start>username_0: Friends, I'm getting a syntax error that is pointing at the closing curly bracket just above the render. It says that it's expecting a comma, but I don't understand why. All of the curly brackets have opening and closing brackets. What am I missing?

```
import React, {Component} from 'react';
import axios from 'axios';

class List extends Component {
  constructor(props){
    super(props)
    this.state = {
      sports: []
    }
  }

  componentWillMount(){
    axios.get('my url is in here')
    .then((response) => {
      this.setState({
        sports: response
      })
    }
  }

  render(){
    return(
      {this.state.sports}
    )
  }
}

export default List;
```

<issue_comment>username_1: You are missing a right parenthesis:

```js
componentWillMount(){
    axios.get('my url is in here')
    .then((response) => {
      this.setState({
        sports: response
      })
    }) // <-- this )
  }
}
```

Upvotes: 2 <issue_comment>username_2: You need to close the `.then()` as follows:

```
componentWillMount() {
  axios.get('my url is in here').then(response => {
    this.setState({
      sports: response,
    });
  }); //<--- here, a ) is needed
}
```

Upvotes: 2 <issue_comment>username_3: Your updated code; you just missed the closing bracket in the `componentWillMount()` method:

```
import React, {Component} from 'react';
import axios from 'axios';

class List extends Component {
  constructor(props){
    super(props)
    this.state = {
      sports: []
    }
  }

  componentWillMount(){
    axios.get('my url is in here')
    .then((response) => {
      this.setState({
        sports: response
      })
    }) // <-- this )
  }

  render(){
    return(
      {this.state.sports}
    )
  }
}

export default List;
```

Upvotes: 1
2018/03/16
473
1,997
<issue_start>username_0: I guess this is more of a philosophical question. I have one item with 4 variations. These variations will share about 80% of their columns. The item in question is a portfolio example and the variations are photography/video/web/graphic. They will share column names like client, date, etc. My question is, should I have 1 table for each variation or 1 base table and an additional table for variation-specific columns. I'm "never" going to need to pull down a photo/web example at the same time or what have you. I wasn't sure how to Google search for this despite it being fairly basic<issue_comment>username_1: As long as there are no specific rules for a variant, I'd just create a Media table. If you decide to keep them separated I'd expect tables to have a date and client_id.

Upvotes: 0 <issue_comment>username_2: You have a one-of relationship. The key question is how *other* tables will reference this entity. There are three options:

1. All references are to the parent entity. For instance, all items have price histories, categories, or comments and the references are only to the parent.
2. All references are to the children. For instance, photos might have objects in them, videos might have lists of producers, and so on.
3. References can be to either.

What is the answer? (1) suggests that all the rows should be in a single table with a single id shared for all. (2) suggests that the rows should be in a single table for each variation. (3) is a little more difficult.

I would recommend a parent table with the 80% of the columns. Then children tables with the remaining ones. The primary keys of the children tables would be the same as the primary key of the parent (yes, a column can be both a primary key and a foreign key). For this situation you need to do a little work to ensure that a given parent entity appears in exactly one child table. That can be handled with foreign keys and triggers (in MySQL).

Upvotes: 2 [selected_answer]
2018/03/16
508
2,083
<issue_start>username_0: I fetch a list of users from the database with their details, and I also fetch a list of blocked userIDs from the database. I just want to subtract the list of blocked users from the users array and then load the screen, so blocked users don't show up. I have an array of userIds called "UsersBlockingCurrentUserArray". I am aiming for:

users - UsersBlockingCurrentUserArray = newarray

load newarray

```
func fetchUsers(){
    //fetch users code get "user"

    self.users.append(user)

    self.users.sort(by: { (p1, p2) -> Bool in
        return p1.distanceFrom.compare(p2.distanceFrom) == .orderedAscending
    })
    self.collectionView?.reloadData()
}
```
2018/03/16
831
3,437
<issue_start>username_0: I've been struggling for quite some time now with trying to maintain a list of objects when the ViewModel is submitted back to the controller. The ViewModel receives the list of objects just fine, but when the form is submitted back to the controller the list is empty. All of the non-collection properties are available in the controller, so I'm not sure what the issue is. I have already read the guide a few people have referenced from <NAME> [here](http://www.hanselman.com/blog/ASPNETWireFormatForModelBindingToArraysListsCollectionsDictionaries.aspx) From what I can see, everyone solves this by building an ActionResult and letting the model binder map the collection to a parameter: Controller: ``` [HttpPost] public ActionResult Submit(List variables) { } ``` View: ``` @for (int i = 0; i < Model.ConfigurationVariables.Count; i++) { } ``` What I really want is to be able to pass my ViewModel back to the controller, including the ConfigurationVariables List: Controller: ``` [HttpPost] public ActionResult Submit(ReportViewModel report) //report.ConfigurationVariables is empty { } ``` View: ``` @for (int i = 0; i < Model.ConfigurationVariables.Count; i++) { @Html.LabelFor(model => model.ConfigurationVariables[i].Name, new { @class = "form-control" }) @Html.TextBoxFor(model => model.ConfigurationVariables[i].Value, new { @class = "form-control" }) } ``` This will be a complicated form and I can't just put every collection into the ActionResult parameters. Any help would be greatly appreciated<issue_comment>username_1: You need to hold the Name property in a hidden input so that it's submitted. Label values are lost. ``` @Html.HiddenFor(model => model.ConfigurationVariables[i].Name) ``` Upvotes: 1 <issue_comment>username_2: Alright, based on your comment you won't be able to utilize mvc's form binding. No worries. Instead of this controller definition: ``` public ActionResult Submit(List variables) ``` Use one of these two: ``` public ActionResult Submit() public ActionResult Submit(FormCollection submittedForm) ``` In the first you can access the Request object, you'll need to debug and locate your variable, then you'll need some logic to parse it apart and used the values submitted. In the second, the form collection will be made up of all the INPUT elements in your form. You will be able to parse through them directly on the object without interference from the other attributes of the Request object. In both cases you will probably need to use @Html.TextBox, and not TextBoxFor, and you will need to dynamically populate your dropdowns in your view. I'm not 100% sure about the Request object, but for sure on the FormCollection you will need to create an Input element for each value/collection you want submitted. Including hidden inputs for your textboxes Your textboxes will need to be SelectListItem collections. those require a key and a value, and when they are submitted you can loop through the collection and check the .Selected attribute. I would try with FormCollection first, and if that doesn't work fall back to the Request object. Also note: you are not getting a viewmodel back from the form submission, you will need to rebuild it from the form elements. If you want to post prepopulated data to the view you will need to build a view model and do appropriate parsing on the view to display it. Upvotes: 0
2018/03/16
671
2,379
<issue_start>username_0: I upgraded my app from spring boot 1.5.9.RELEASE to 2.0.0.RELEASE, and I can no longer import `org.springframework.boot.context.embedded.LocalServerPort`. I was using this to inject the port the server is running on during a test: ``` public class Task1Test { @LocalServerPort private int port; ``` The [Spring release notes](https://github.com/spring-projects/spring-boot/wiki/Spring-Boot-1.5-Release-Notes) do not mention this removal and [@LocalServerPort was not deprecated](http://www.atetric.com/atetric/javadoc/org.springframework.boot/spring-boot/1.5.9.RELEASE/org/springframework/boot/context/embedded/LocalServerPort.html). Is there an equivalent in Spring Boot 2.0 that I can use? **Edit**: I'm pretty sure that the class is gone. I'm getting these compilation errors: ``` [ERROR] ... Task1Test.java:[12,49]package org.springframework.boot.context.embedded does not exist [ERROR] ... Task1Test.java:[46,6] cannot find symbol symbol: class LocalServerPort ```<issue_comment>username_1: It looks like it was moved to [org.springframework.boot.web.server.LocalServerPort](https://docs.spring.io/spring-boot/docs/current-SNAPSHOT/api/org/springframework/boot/web/server/LocalServerPort.html) without notice. Hope that helps. Upvotes: 6 [selected_answer]<issue_comment>username_2: It seems it's been moved to `spring-boot-starter-web` dependency as per this [API](https://docs.spring.io/spring-boot/docs/current-SNAPSHOT/api/org/springframework/boot/web/server/LocalServerPort.html) documentation. Try adding this maven dependency to see if that fixes it ``` org.springframework.boot spring-boot-starter-web 2.0.0.RELEASE ``` Upvotes: 2 <issue_comment>username_3: @LocalServerPort is now in ***org.springframework.boot.test.web.server.LocalServerPort*** Update your imports in the test code. Relevant link to the Spring boot docs here <https://docs.spring.io/spring-boot/docs/current/api/org/springframework/boot/test/web/server/LocalServerPort.html> the change is notified here <https://docs.spring.io/spring-boot/docs/current/api/org/springframework/boot/web/server/LocalServerPort.html> Note that if you are using the Spring Boot Getting Started guides, the guides are still using the old namespace and hence you will see a build error without an option to fix. You will need to update this manually. Upvotes: 3
2018/03/16
497
1,844
<issue_start>username_0: I have two dates. One of them is the current date, one of them is a date somebody uploaded something, this is stored in a database. I need to find out if the date stored in the database is older than 7 days from the current date. I'm using PHP's `date(d/m/y);`, I've tried some things online, I've tried dateDifference() from php.net, I've tried converting them to timestamps and taking them away, but neither of these seem to work. Is there a simpler way?
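Whatever PHP helper ends up being used, the check itself is plain timestamp arithmetic: parse the stored date, subtract it from the current date, and compare against seven days. As an illustration of the logic only (sketched in Python, reusing the `d/m/y` format from the question):

```python
from datetime import datetime, timedelta

stored = datetime.strptime('09/03/18', '%d/%m/%y')  # upload date as stored
if datetime.now() - stored > timedelta(days=7):
    print('older than 7 days')
```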
2018/03/16
502
1,655
<issue_start>username_0: I try to generate 300k urls with python in a text file:

```
with open(r'somefile.txt') as f_out:
    for i in range(100, 120):
        f_out.write(r'www.website.com/{}\n'.format(i))
```

But the result is:

```
www.website.com/
101www.websitecom/
102www.website.com/
```

etc
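For reference, two details in the snippet explain the output: the `r''` raw-string prefix keeps `\n` as a literal backslash-n instead of a newline, and `open()` without a mode argument defaults to reading (presumably a `'w'` was dropped when the snippet was simplified). A minimal corrected sketch, keeping the question's filename and range:

```python
# 'w' opens the file for writing; dropping the r'' prefix lets \n
# act as a real newline, so each URL lands on its own line.
with open('somefile.txt', 'w') as f_out:
    for i in range(100, 120):
        f_out.write('www.website.com/{}\n'.format(i))
```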
2018/03/16
415
1,495
<issue_start>username_0: I have two UILabels in a UIStackView. Is it possible for the UIStackView to get focus and gestures in tvOS?
2018/03/16
1,323
2,371
<issue_start>username_0: There is an one-dimensional array, for instance, as shown in the following. Are there any functions that can transform this array into another array, which only keeps the top 5 elements of the existing array. These five kept elements are marked as `5, 4,3,2,1` based on their respective numerical values, and other elements are just marked as `0`. ``` 9.00E-05 8.74E-05 -6.67E-05 -0.000296984 -0.00016961 -7.49E-06 -0.000102942 -0.000183901 0.000206149 5.62E-05 0.000112588 5.93E-05 9.85E-05 -2.29E-05 5.08E-05 0.00015748 ```<issue_comment>username_1: Here is one solution from `rank` ``` s=df.rank(ascending=False) s.mask(s>5,0).astype(int) Out[74]: 0 5 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 1 9 0 10 3 11 0 12 4 13 0 14 0 15 2 Name: val, dtype: int32 ``` Upvotes: 1 <issue_comment>username_2: One way is to use `numpy`. We assume your array is held in variable `arr`. ``` args = arr.argsort() arr[args[-5:]] = range(5, 0, -1) arr[args[:-5]] = 0 # array([ 5., 0., 0., 0., 0., 0., 0., 0., 1., 0., 3., 0., 4., # 0., 0., 2.]) ``` Upvotes: 1 [selected_answer]<issue_comment>username_3: If you want the numbers to remain in the same order and obtain an array of tuples with the original number and rank, you could do this: ``` numbers = [ 9.00E-05, 8.74E-05, -6.67E-05, -0.000296984, -0.00016961, -7.49E-06, -0.000102942, -0.000183901, 0.000206149, 5.62E-05, 0.000112588, 5.93E-05, 9.85E-05, -2.29E-05, 5.08E-05, 0.00015748] ranks = { n:max(5-i,0) for (i,n) in enumerate(sorted(numbers)) } tagged = [ (n,ranks[n]) for n in numbers ] # tagged will contain : [(9e-05, 0), (8.74e-05, 0), (-6.67e-05, 1), (-0.000296984, 5), (-0.00016961, 3), (-7.49e-06, 0), (-0.000102942, 2), (-0.000183901, 4), (0.000206149, 0), (5.62e-05, 0), (0.000112588, 0), (5.93e-05, 0), (9.85e-05, 0), (-2.29e-05, 0), (5.08e-05, 0), (0.00015748, 0)] ``` if the original order doesn't matter, you only need this: ``` tagged = [ (n,max(5-i,0)) for (i,n) in enumerate(sorted(numbers)) ] # then tagge will be : [(-0.000296984, 5), (-0.000183901, 4), (-0.00016961, 3), (-0.000102942, 2), (-6.67e-05, 1), (-2.29e-05, 0), (-7.49e-06, 0), (5.08e-05, 0), (5.62e-05, 0), (5.93e-05, 0), (8.74e-05, 0), (9e-05, 0), (9.85e-05, 0), (0.000112588, 0), (0.00015748, 0), (0.000206149, 0)] ``` Upvotes: 1
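For a self-contained run of the `rank` approach in the first answer (a sketch; that answer assumes the values already sit in a DataFrame column named `val`):

```python
import pandas as pd

vals = [9.00e-05, 8.74e-05, -6.67e-05, -0.000296984, -0.00016961,
        -7.49e-06, -0.000102942, -0.000183901, 0.000206149, 5.62e-05,
        0.000112588, 5.93e-05, 9.85e-05, -2.29e-05, 5.08e-05, 0.00015748]

s = pd.Series(vals).rank(ascending=False)  # rank 1 = largest value
marks = s.mask(s > 5, 0).astype(int)       # anything beyond the top 5 becomes 0
print(marks.tolist())
# [5, 0, 0, 0, 0, 0, 0, 0, 1, 0, 3, 0, 4, 0, 0, 2]
```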
2018/03/16
391
1,141
<issue_start>username_0: How do you count the amount of zero rows of a numpy array?

```
array = np.asarray([[1,1],[0,0],[1,1],[0,0],[0,0]])
```

-> has three rows with all zeros, hence should give 3. Took me some time to figure this one out, and I also couldn't find an answer on SO<issue_comment>username_1:

```
import numpy as np
array = np.array([[1,1],[0,0],[1,1],[0,0],[0,0]])

non_zero_rows = np.count_nonzero((array != 0).sum(1))  # gives 2
zero_rows = len(array) - non_zero_rows  # gives 3
```

Any better solutions? Upvotes: 1 <issue_comment>username_2: You could also leverage the "truthiness" of non-zero values in an array.

```
np.sum(~array.any(1))
```

i.e., sum the rows where none of the values in said row are truthy (and hence are all zero) Upvotes: 4 [selected_answer]<issue_comment>username_3: You can use `np.all` if you're sure that each row is either all zeros or contains no zeros at all.

```
# number of rows where every element is non-zero
In [33]: sum(np.all(arr, axis=1))
Out[33]: 2

# invert the mask to get the number of rows with all-zero elements
In [34]: sum(~np.all(arr, axis=1))
Out[34]: 3
```

Upvotes: 2
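A self-contained run of the truthiness approach from the accepted answer (a sketch):

```python
import numpy as np

array = np.asarray([[1, 1], [0, 0], [1, 1], [0, 0], [0, 0]])

# array.any(axis=1) is True for rows containing at least one non-zero
# entry; negating and summing therefore counts the all-zero rows.
zero_rows = np.sum(~array.any(axis=1))
print(zero_rows)  # 3
```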
2018/03/16
172
708
<issue_start>username_0: Not to be confused with wanting to know how `use strict` works, but why is strict mode activated with `'use strict';` or `"use strict";` as opposed to an expression like `use strict;`?<issue_comment>username_1: `use strict;` would throw an error in JS engines that did not support strict mode. Using a string is backwards compatible. Upvotes: 5 [selected_answer]<issue_comment>username_2: I think the answer is simple. As we know, ECMAScript (a.k.a. JavaScript) has never been implemented uniformly across engines. So why a string? It's because all browsers agreed on treating `'use strict'` as preferable to `use strict`. It also prevents errors on older browsers that do not support strict mode. Upvotes: -1
2018/03/16
727
2,858
<issue_start>username_0: I just learned about React Fragments. I understand that fragments are slightly more efficient by creating fewer tree nodes and making things cleaner when looking at the inspector, but then why should we ever use `<div>`s as containers in React components? Should we always just use React.Fragments in our components? Will using fragments make it more difficult for styling? (I'm not sure, as I haven't tried it out myself yet.)<issue_comment>username_1: In most common practice, React only allows a component to render a single root element. If you want to render multiple elements, you could surround them with a `<div>` or a `<React.Fragment>`. Fragment situation example: one of the most important usages of Fragment is in a `<table>` (as stated in the official website's example [Fragment](https://reactjs.org/docs/fragments.html)): in a situation where you want to render multiple `<tr>` rows from a component, you might find that it's impossible to wrap them in a `<div>`, since inside a `<table>` you can only have table elements such as `<tr>` and `<td>`! Well, in other situations, I recommend always rendering a single root element, unless you have to do otherwise. Upvotes: 3 <issue_comment>username_2: Using `<>` is just bad practice IMO. I recommend only using a wrapper `<div>` when you **require** the children to be placed inside a parent element. Otherwise it is pointless/excess HTML that is just not required. For example, here we have a classname associated with the Div element.

```html
Hello Shippers
==============
Hello Matey
===========
Hello Pal
=========
Hello Mucka
===========
Hello Geeza
===========
Hello Dude
==========
```

I don't recommend using the following example, as the Div elements serve no purpose apart from a readability perspective.

```html
Hello Shippers
==============
Hello Matey
===========
Hello Pal
=========
Hello Mucka
===========
Hello Geeza
===========
Hello Dude
==========

Hello Shippers
==============
Hello Matey
===========
Hello Pal
=========
Hello Mucka
===========
Hello Geeza
===========
Hello Dude
==========
```

If you need to return multiple elements which don't need a parent, you can use this approach.

```
import React, { Fragment } from 'react'

const names = ['Shippers', 'Matey', 'Pal', 'Mucka', 'Geeza', 'Dude']

export const Headers = () => (
  <Fragment>
    {names.map(name => <h1>{`Hello ${name}`}</h1>)}
  </Fragment>
)
```

Then you can import it wherever you render, like:

```
import React from 'react'
import { Headers } from './Headers'

export const Body = () => (
  <Headers />
)
```

This will transpile into

```html
<h1>Hello Shippers</h1>
<h1>Hello Matey</h1>
<h1>Hello Pal</h1>
<h1>Hello Mucka</h1>
<h1>Hello Geeza</h1>
<h1>Hello Dude</h1>
```

Upvotes: -1 <issue_comment>username_3: You can use `<React.Fragment>` when you have to pass props to it; if you don't need props, you can use the short syntax `<> </>`. Upvotes: 1
2018/03/16
820
2,622
<issue_start>username_0: I am trying to create a dot plot using `geom_dotplot` of `ggplot2`. However, as shown in the examples on [this page](http://ggplot2.tidyverse.org/reference/geom_dotplot.html), the scales of the y-axis range from 0 to 1. I wonder how I can change the y-axis scale so the values reflect the actual count of the data.<issue_comment>username_1: I would recommend you to use `geom_histogram` instead.

```
library(ggplot2)
ggplot(mtcars, aes(x = mpg)) + geom_histogram(binwidth=1)
```

The issue seems to be that `geom_dotplot` cannot be converted to count, as seen in the github issue [here](https://github.com/tidyverse/ggplot2/issues/2203). Upvotes: 1 <issue_comment>username_2: Here is an example which might be helpful.

```
library(ggplot2)
library(ggExtra)
library(dplyr)

# use the preloaded iris package in R
irisdot <- head(iris["Petal.Length"],15)

# find the max frequency (used `dplyr` package). Here n is the label for frequency returned by count().
yheight <- max(dplyr::count(irisdot, Petal.Length)["n"])

# basic dotplot (binwidth = the accuracy of the data)
dotchart = ggplot(irisdot, aes(x=Petal.Length), dpi = 600)
binwidth = 0.1
dotsize = 1
dotchart = dotchart + geom_dotplot(binwidth=binwidth, method="histodot", dotsize = dotsize, fill="blue")

# use coord_fixed(ratio=binwidth*dotsize*max frequency) to set up the right y axis height.
dotchart = dotchart + theme_bw() + coord_fixed(ratio=binwidth*dotsize*yheight)

# tweak the theme a little bit
dotchart = dotchart + theme(panel.background=element_blank(),
                            panel.border = element_blank(),
                            panel.grid.minor = element_blank(),
                            # plot.margin=unit(c(-4,0,-4,0), "cm"),
                            axis.line = element_line(colour = "black"),
                            axis.line.y = element_blank(),
)

# add more tick marks on x axis
dotchart = dotchart + scale_x_continuous(breaks = seq(1,1.8,0.1))

# add tick marks on y axis to reflect frequencies. Note yheight is max frequency.
dotchart = dotchart + scale_y_continuous(limits=c(0, 1), expand = c(0, 0),
                                         breaks = seq(0, 1,1/yheight), labels=seq(0,yheight))

# remove x y labels and remove vertical grid lines
dotchart = dotchart + labs(x=NULL, y=NULL) + removeGridX()
dotchart
```

[![A dotplot for 15 iris petal lengths](https://i.stack.imgur.com/pcu3X.png)](https://i.stack.imgur.com/pcu3X.png) I don't know why it works. It seems that the height of the y axis for `geom_dotplot` is 1. The ratio between x and y was set up by `coord_fixed(ratio = binwidth * dotsize * max frequency)`. Upvotes: 3
2018/03/16
1,910
6,823
<issue_start>username_0: I'm trying to solve this exercise but I can't think of a solution. I need to check if a list is a palindrome, taking these considerations into account: If the list is simple I just need to check if it's palindrome horizontally, but if it's a nested list I need to check both, vertically and horizontally. I also need to keep in mind that each element inside the list has to be a palindrome by itself, for example: **A = [1,2,3,3,2,1]** is a palindrome. For this case I just created a single function that uses **reverse**:

```
unidimensional:: (Eq a) => [a] -> Bool
unidimensional [] = error "List is empty."
unidimensional xs = xs == reverse xs
```

also this case, for example: **B = [[1,2,1],[1,2,1]]** is horizontally palindrome, and to check if it's palindrome vertically I just transpose it to **[[1,1],[2,2],[1,1]]**, and the result is: yes, it is palindrome both ways. I solved this by using the function transpose to evaluate vertically if it is palindrome and then, using the function unidimensional that I used before, I check if it's palindrome horizontally. Everything works fine:

```
--Checks if it's palindrome horizontally and vertically
bidimensional:: (Eq a) => [[a]] -> Bool
bidimensional [[]] = error "Empty List."
bidimensional (xs) = if left_right xs && up_down xs then True else False

--Checks if it's palindrome horizontally:
left_right (x:xs) = if x == reverse x then if xs /= [] then bidimensional xs else True else False

--Checks if it's palindrome vertically:
up_down:: (Eq a) => [[a]] -> Bool
up_down (xs) = left_right ys where ys = transpose xs

transpose:: [[a]]->[[a]]
transpose ([]:_) = []
transpose x = (map head x) : transpose (map tail x)
```

the problem is here: The input my program needs to receive has to be something like this:

```
> palindrome [[1,2,2,1], [3,7,9,9],[3,7,9,9], [1,2,2,1]]
```

My problem is: my function palindrome should receive as a parameter a list **[a]**, but palindrome should work for nested lists, like **[[[a]]]** or **[[a]]**. palindrome is the function that takes the input. The thing is that when I get a simple list, my head, which is **x**, is a number, and **xs**, which is the tail, would be the rest of the numbers, and that's ok; but when palindrome receives a nested list, for example **[[[2,2],[2,2]],[[1,1],[1,1]]]**, the head **x** is now **[[2,2],[2,2]]**, so I can't use my function bidimensional: when I try to call bidimensional with **xs**, which is **[[2,2],[2,2]]**, I get an error, because **xs** is now type **a** and not **[[a]]**. **My question is**: How can I make my function palindrome work with any type of list (simple and nested), taking into account the error I mentioned before? Thanks in advance!<issue_comment>username_1: > > My question is: How can I make my function palindrome, work with any type of list (simple and nested) taking into account the error I mentioned before. Thanks in advance! > > > Let's think about this hypothetical `palindrome` function and try to figure out what type signature it would need. (Thinking about a function in terms of its type signature always helps me.) Suppose `palindrome` has a signature `palindrome :: [x] -> Bool` We want the following statements to be true:

```
palindrome [1, 2, 1] === True
palindrome [1, 2, 2] === False
palindrome [[1, 2, 1], [3, 4, 3], [1, 2, 1]] === True
palindrome [[1, 2, 2], [3, 4, 3], [1, 2, 2]] === False
```

In the first two assertions, `palindrome` specializes to `[Integer] -> Bool`, so `x` is `Integer` in those cases.
The only sensible implementation with `x === Integer` is simply to check if the supplied list of integers is a palindrome, i.e., check that the first element equals the last, pop those off and repeat (or, equivalently, check that `xs` equals `reverse xs` as you have). We can use this algorithm whenever `x` is an instance of `Eq`. In the last two properties, `palindrome` specializes to `[[Integer]] -> Bool`, so `x` is `[Integer]` there. It seems like we should be able to detect that `x` is itself a list of integers, and then know that we need to recursively apply palindrome to each inner list. However, Haskell polymorphism doesn't work that way. In order for a function to be polymorphic over a type parameter `x`, we need to define a single implementation of that function that works the same way *no matter what type `x` happens to be*. In other words, to define a polymorphic function `palindrome :: [x] -> Bool`, our implementation cannot know anything about the type parameter `x`. This forces us to use the same implementation when `x === [Integer]` as we used when `x === Integer`, which would evaluate to `True` instead of `False` for the final test case. You won't be able to make your `palindrome` function work the way you want it to for nested lists in Haskell if you insist on the input being standard `[]` types. One thing you might do is have your `palindrome` function take an extra parameter of type `Int` that tells the function how deeply to check. In that case, you'd need to know ahead of time how deeply nested your input is. Another thing you might do is write your `palindrome` function to take input in some other data structure besides `[]`, like a `Tree`, or something with arbitrary nesting. Maybe you can write your function to accept a `Value`, a type that represents arbitrary JSON values from the popular library [Aeson](https://hackage.haskell.org/package/aeson-1.3.0.0/docs/Data-Aeson-Types.html). *Note:* It's probably a good idea for `palindrome []` to be `True`, no? Upvotes: 0 <issue_comment>username_2: You could use a typeclass. ``` class Palindrome a where palindrome :: a -> Bool instance Palindrome Integer where palindrome _ = True instance (Eq a, Palindrome a) => Palindrome [a] where palindrome xs = xs == reverse xs && all palindrome xs ``` For a two-dimensional list, you are checking if the list is a palindrome, "both horizontally and vertically". This can be done without the `transpose` function: the list is a palindrome "vertically" if the list is a palindrome, and the list is a palindrome "horizontally" if every sublist is a palindrome. The second `instance` checks both. For a one-dimensional list, the second `instance` simply checks if the list is a palindrome. In this case, the `&& all palindrome xs` might as well not be there: since the first `instance` specifies that `Integer`s are always palindromes, `all palindrome xs` always evaluates to `True`. This can be seen as the "base case" for this recursive algorithm. This will work for any depth of nested lists. You can also instantiate the class for other base data types, or (I'm pretty sure) even for ALL types of class `Eq` (although this leads to overlapping instances which is its own can of worms). Upvotes: 1
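username_2's recursion translates almost line for line to a dynamically typed setting; as a sketch in Python, where a runtime type check stands in for the typeclass dispatch:

```python
def palindrome(x):
    # Base case: non-list values (the "Integer instance") are
    # trivially palindromes.
    if not isinstance(x, list):
        return True
    # A list is a palindrome if it reads the same reversed AND every
    # element is itself a palindrome; this recurses to any depth.
    return x == x[::-1] and all(palindrome(e) for e in x)

print(palindrome([[1, 2, 1], [3, 4, 3], [1, 2, 1]]))  # True
print(palindrome([[1, 2, 2], [3, 4, 3], [1, 2, 2]]))  # False
```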
2018/03/16
1,052
3,538
<issue_start>username_0: I am trying to get the total minutes of a LocalTime object in my JTable. As I didn't find any method, I tried to convert it to a string and then convert the string to minutes. The length of the nanoseconds part is always changing depending on the table inputs, and it then throws a DateTimeParseException. How can I convert it without a parse exception? Here is the code I use:

```
String pause = String.valueOf(table.getValueAt(zeile, 4));
DateTimeFormatter formatter = DateTimeFormatter.ofPattern("HH:mm:ss.SSSSSS");
LocalTime localTime1 = LocalTime.parse(pause, formatter);
```

Throws the following error:

```
Exception in thread "AWT-EventQueue-0" java.time.format.DateTimeParseException: Text '00:21:07.480781500' could not be parsed, unparsed text found at index 15
```
<issue_comment>username_1: Try

```
DateTimeFormatter formatter = DateTimeFormatter.ofPattern("HH:mm:ss.SSSSSSSSS");
```

The time passed to the parse method has 9 digits for the fractional seconds but your format string only specified 6. Upvotes: 0 <issue_comment>username_2: With this format, you don't need a formatter, just do:

```
LocalTime localTime1 = LocalTime.parse(pause);
```

All java.time classes can directly parse a string if it's in ISO8601 format, which is the case: <https://en.m.wikipedia.org/wiki/ISO_8601> Upvotes: 2 <issue_comment>username_3: When parsing the string, the format is not different from the value I get with the String.valueOf() method. I managed to get the minutes like this:

```
String pause = String.valueOf(table.getValueAt(zeile, 4));
if (pause.length()>5) {
    pause = pause.substring(0,pause.indexOf(':') + 3);
}
String[] split = pause.split(":");
int hours = Integer.parseInt(split[0]);
int minutes = Integer.parseInt(split[1]);
int result = hours * 60 + minutes;
System.out.println("result: " + result);
```

Is there a more elegant way to do this, or is this correct? Upvotes: 0 <issue_comment>username_4: I suspect that your 00:21:07.480781500 is really the length of your break or pause, so a duration, an amount of time. If so, you are really using the wrong class in your table cell.

* A `Duration` is for a duration, an amount of time, like 21 minutes and some seconds.
* A `LocalTime` is for a time of day from 00:00:00 through 23:59:59.999999999.

If I understand what you mean, getting the total minutes from a duration makes sense; getting them from a `LocalTime` not really. The challenge with a `Duration` is formatting it nicely for the user. However this has been addressed in a number of questions and answers already. So you may find a solution [in this question: How to format a duration in java? (e.g format H:MM:SS)](https://stackoverflow.com/questions/266825/how-to-format-a-duration-in-java-e-g-format-hmmss) and/or search for other questions and answers. I think you need to subclass `DefaultTableCellRenderer` to display the formatted `Duration` in your table. In return, getting the minutes is straightforward:

```
Duration pause = (Duration) table.getValueAt(zeile, 4);
long result = pause.toMinutes();
System.out.println("result: " + result);
```

This printed:

> result: 21

If you really meant a time of day, you should of course keep your `LocalTime` objects. Then the answer is:

```
LocalTime pause = (LocalTime) table.getValueAt(zeile, 4);
int result = pause.get(ChronoField.MINUTE_OF_DAY);
System.out.println("result: " + result);
```

The output is the same:

> result: 21

Upvotes: 1 [selected_answer]
2018/03/16
1,130
3,718
<issue_start>username_0: I am trying to create a trigger in Oracle whereby we move deleted records to another table. So when the deleted column is set to 1, it should move the records from the patient_table to the deleted_patient_table. Can you please help :)

```
CREATE TABLE Patient_Table(
PatientID NUMBER(6) Primary Key,
Title char(4) NOT NULL,
Forename varchar2(20) NOT NULL,
Surname varchar2(20) NOT NULL,
Gender char(1) NOT NULL CHECK (Gender in ('M','F')),
DOB date NOT NULL,
TelNo varchar(12) NOT NULL,
Conditions varchar(200) NOT NULL,
Deleted Number(1) NOT NULL CHECK (Deleted in ('0','1')));
```

-- Table that should contain deleted records --

```
Create Table Deleted_Patient_table(
PatientID NUMBER(6) Primary Key,
Title char(4) NOT NULL,
Forename varchar2(20) NOT NULL,
Surname varchar2(20) NOT NULL,
Gender char(1) NOT NULL CHECK (Gender in ('M','F')),
DOB date NOT NULL,
TelNo varchar(12) NOT NULL,
Conditions varchar(200) NOT NULL,
Deleted Number(1));
```
2018/03/16
1,064
3,620
<issue_start>username_0: * I am trying to integrate a right click menu into the grid header.
* when I right click on the google link it's working.
* but whereas when I right click on the grid header the menu is not showing.
* can you tell me how to fix it.
* providing my code below. <http://jsfiddle.net/c7gbh1e9/>

```js
$('body').on('contextmenu', 'a.test', function() {
  //alert("contextmenu"+event);
  document.getElementById("rmenu").className = "show";
  document.getElementById("rmenu").style.top = mouseY(event);
  document.getElementById("rmenu").style.left = mouseX(event);

  document.getElementsByClassName("k-grid-header").className = "show";
  document.getElementsByClassName("k-grid-header").style.top = mouseY(event);
  document.getElementsByClassName("k-grid-header").style.left = mouseX(event);
  //getElementsByClassName
  window.event.returnValue = false;
});
```
2018/03/16
1,585
6,214
<issue_start>username_0: I have a largish base.twig file, and I want to break it up into three includes: header.twig, content.twig, and footer.twig. I'm having trouble getting the block from my child template to override the block included into my parent template, and I would like to know if it's even possible, and if not, what a Twig-ish solution might look like. I've set up a simple example to illustrate the question. I'm retrieving a Wordpress page and using Timber to process the Twig templates. The PHP template that gets invoked is page-test.php:

```
$context = Timber::get_context();
Timber::render('test_child.twig', $context);
?>
```

The Twig template that gets rendered is test_child.twig:

```
{% extends 'test_base.twig' %}
{% block content_main %}
Random HTML
===========
{% endblock content_main %}
```

The parent template, test_base.twig, is:

```
Twig test
{% include 'test_content.twig' %}
```

And finally, the included template, test_content.twig, is like this:

```
{% block content_main %}
{% endblock content_main %}
```

The resulting output looks like this:

```
Twig test
```

As you can see, the included block has no content. What I expected was for it to contain the `Random HTML ===========` fragment from test_child.twig. Why isn't the block from test_child.twig overriding the same-named block included from test_content.twig into test_base.twig? And if the approach simply won't work, what's the best Twig-ish way of accomplishing something close?<issue_comment>username_1: This is indeed not possible with `twig`, due to the fact that included files have no affinity with the templates that called them. To explain myself, have a look at this snippet: `{{ include('foo.twig') }}` This snippet will be parsed into `PHP` by the `twig` compiler, and the code it compiles into is this: `$this->loadTemplate("foo.twig", "main.twig", 6)->display($context);` Now we can investigate this further by looking at the source of `Twig_Template::loadTemplate`. If you have a look at that particular function, you will see that because you are passing a `string` to the function, the function `loadTemplate` will be called in the class `Twig_Environment`. In this last function we can clearly see that the `Twig_Environment::loadTemplate` function is not passing any information nor instance of the template you rendered towards the template you are including. The only thing that gets passed (`by value`) is the variable `$context`, which holds all variables you've sent from your controller to the template you are rendering. I'm guessing one of the main reasons this is coded as such is that included files should be reusable in any situation and should not have dependencies like a (non-existent) block in order to be rendered --- *TwigTemplate.php* ``` protected function loadTemplate($template, $templateName = null, $line = null, $index = null) { try { if (is_array($template)) return $this->env->resolveTemplate($template); if ($template instanceof self) return $template; if ($template instanceof Twig_TemplateWrapper) return $template; return $this->env->loadTemplate($template, $index); } catch (Twig_Error $e) { if (!$e->getSourceContext()) $e->setSourceContext($templateName ?
new Twig_Source('', $templateName) : $this->getSourceContext()); if ($e->getTemplateLine()) throw $e; if (!$line) { $e->guess(); } else { $e->setTemplateLine($line); } throw $e; } } ``` --- *Environment.php* ``` public function loadTemplate($name, $index = null) { $cls = $mainCls = $this->getTemplateClass($name); if (null !== $index) { $cls .= '_'.$index; } if (isset($this->loadedTemplates[$cls])) { return $this->loadedTemplates[$cls]; } if (!class_exists($cls, false)) { $key = $this->cache->generateKey($name, $mainCls); if (!$this->isAutoReload() || $this->isTemplateFresh($name, $this->cache->getTimestamp($key))) { $this->cache->load($key); } if (!class_exists($cls, false)) { $source = $this->getLoader()->getSourceContext($name); $content = $this->compileSource($source); $this->cache->write($key, $content); $this->cache->load($key); if (!class_exists($mainCls, false)) { /* Last line of defense if either $this->bcWriteCacheFile was used, * $this->cache is implemented as a no-op or we have a race condition * where the cache was cleared between the above calls to write to and load from * the cache. */ eval('?>'.$content); } if (!class_exists($cls, false)) { throw new Twig_Error_Runtime(sprintf('Failed to load Twig template "%s", index "%s": cache is corrupted.', $name, $index), -1, $source); } } } ``` --- As I'm not sure why you have this set-up,this would be a more `twig`-style setup. Please note you have to define the blocks in your base class first, because "defining" block in an extended class, tries to display the blocks of your parent and does not create a new one. *test\_main.twig* ``` Twig test {% block content\_main %} {% include 'block.twig' %} {% endblock %} ``` *test\_child.twig* ``` {% extends "test_main.twig" %} {% block content_main %} {% include "test_content.twig" %} {% endblock %} ``` *test\_content.twig* ``` Lorem Lipsum ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: Unfortunately this does not work with include. I had this issue as well when I was trying to pass some SEO values from my product controller to the base template that included the meta tags. You have to use "extends" for the inner template as well, and point your controller to use the inner template instead of the middle/layout one. You can then define a separate block on your inner template, which can directly override the base template's block. You can see a working example in this Fiddle (Note that the inner template is the main one) <https://twigfiddle.com/1ve5kt> Upvotes: 0
2018/03/16
619
2,467
<issue_start>username_0: This is the file path for my Pydev project in Eclipse:

```
project
|
+----tests
|    |
|    +----subtests
|    |    |
|    |    +----__init__.py
|    |    |
|    |    +----test1.py
|    |
|    +----__init__.py
|    |
|    +----test2.py
|
+----mods
     |
     +----__init__.py
     |
     +----submods1
          |
          +----__init__.py
          |
          +----submods2
               |
               +----__init__.py
               |
               +----a.py
               |
               +----b.py
               |
               ...
               |
               +----z.py
```

test1 and test2 are exactly the same, and all of the init files only have comments in them. The tests are getting the modules from the mods directory and those modules' dependencies. When I run test1, all of the modules are found, but test2 is always unable to find the same module (let's call it "z.py") in submods2. But somehow it's able to find the rest of the modules. It's not that it's unable to import something in z.py; it just cannot find the file at all. test2:

```
>>> from mods.submod1.submod2 import z
exec exp in global_vars, local_vars
File "", line 1, in
ImportError: cannot import name z
>>> from mods.submod1 import submod2
>>> hasattr(submod2, 'z')
False
```

The only difference in the `sys.path` during the two tests is the directories that the tests are located in: `project/tests/subtests` for test1 and `project/tests` for test2. I cannot figure out why test2 is unable to import z.py while test1 can, and why test2 can import the rest of the modules.<issue_comment>username_1: To help diagnose the issue, do:

```
from mods.submod1 import submod2
print(submod2)
```

My guess is that it's not the module you're expecting. What Python version are you using? Upvotes: 1 <issue_comment>username_2: I think I found my solution to this. In my `Run Configurations` for test2, the `Working directory` in the `Arguments` tab had a custom path `${workspace_loc:project/tests/}`; I switched it to the default path `${project_loc:/selected project name}` and that seems to fix the issue. While I don't understand how this fixed the problem, the result is good enough for me. Upvotes: 1 [selected_answer]
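A slightly expanded version of username_1's diagnostic, worth running from both test configurations (a sketch; it only prints and changes nothing):

```python
import sys
from mods.submod1 import submod2

# Which file actually backs the package, and does it expose z?
print(submod2.__file__)
print(hasattr(submod2, 'z'))

# Comparing these lists between the two runs usually shows which
# entry causes a stale or shadowing copy of the package to win.
for p in sys.path:
    print(p)
```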
2018/03/16
781
2,785
<issue_start>username_0: I have installed the latest versions of Unity (2017.3.1) and the Daydream SDKs (GVR SDK for Unity v1.130.0) and followed the steps here: <https://developers.google.com/vr/develop/unity/get-started#configure_build_settings_and_player_settings> However, when I press Play I get the following error; any ideas on what the reason is and how to fix it?

> Assets/GoogleVR/Editor/GvrBuildProcessor.cs(20,19): error CS0234: The type or namespace name 'iOS' does not exist in the namespace 'UnityEditor'. Are you missing an assembly reference?

Edit: Issue fixed after a few hours of submission on GitHub with the release of GVR SDK for Unity v1.130.1<issue_comment>username_1: I met this issue when using unity-arkit-plugin to develop something; I did nothing but import this plugin and it threw this error. After I installed UnitySetup-IOS-Support-for-Editor-2017.3.0f3.pkg, the issue was fixed. Upvotes: 0 <issue_comment>username_2: The Official Solution: <https://forum.unity.com/threads/unityeditor-ios-xcode-ios-namespace-doesnt-exist.365381/> Step by step, I did: Figure 1 shows the Unity Download Assistant installation file that should be run. [![Figure 1 - Unity Download Assistant](https://i.stack.imgur.com/M5oxq.png)](https://i.stack.imgur.com/M5oxq.png) Figure 2 shows the "iOS Build Support" option that should be checked for installation. [![Figure 2 - Unity Install iOS Build Support](https://i.stack.imgur.com/8fTA5.png)](https://i.stack.imgur.com/8fTA5.png) Upvotes: 2 <issue_comment>username_3: I went ahead and changed the "Current Platform" to "iOS" in Unity Hub. This is a quick fix and it works now. **Change "Current Platform" to iOS Image**: ![Change ](https://i.stack.imgur.com/cjISi.jpg) Upvotes: 2 <issue_comment>username_4: If you're facing this problem, there are 2 steps to solve it:

1. Make sure that "**iOS Build Support**" is checked in **Unity Hub -> Installs**, as *Adam* mentioned (if you're developing for the iOS platform then this option most definitely is already checked).
2. Place your script in the **Assets -> Editor** folder. The *UnityEditor.iOS.Xcode* namespace cannot be used in your main project. It works only in the special *Assembly-CSharp-Editor* project intended to modify the native Xcode project. (That's the option that worked for me after searching a ton of forums.)

Upvotes: 4 <issue_comment>username_5: If you have installed iOS Build Support through Unity Hub while the project is open, try restarting Unity Hub and the Unity Editor. This solved my issue. Upvotes: 1 <issue_comment>username_6: This usually happens when **iOS Build Support** is not installed in **Unity Hub** Upvotes: 0 <issue_comment>username_7: If iOS support is installed through Unity Hub after the editor is opened, save the project, then close and reopen it. Upvotes: 0
2018/03/16
3,312
12,133
<issue_start>username_0: I want to create a responsive, mobile optimized reading experience similar to an epub/ebook reader, like the Kindle app, or iBooks, using dynamic html as the source. Imagine a long article or blog post that requires a lot of vertical scrolling to read, especially on a small mobile device. What I would like to do is break the long page into multiple full-screen sections, allowing the user to use left/right navigation arrows and/or the swipe gesture to "page" through the article. There are many JS libraries available that can create a "slide show" or "carrousel" of pre-defined slides (using divs or other container elements). But I want the text and html content to dynamically re-flow to fit any device viewport and still be readable... just like an epub/ebook user interface, like the Kindle app or iBooks. So, for the same article, there would be many more "pages" on a phone than there would be on a tablet or desktop viewport, and those "pages" would need to be dynamically created/adjusted if/when the viewport size changes (like switching from portrait to landscape on a mobile device). Here is an example of a javascript .epub reader: [epub.js](http://futurepress.github.io/epub.js/reader/#epubcfi(/6/16[xchapter_002]!/4/2/2/2/2[c002p0000]/1:0)) ... notice the responsive behavior. When you resize your viewport, all the text re-flows to fit the available space, increasing or decreasing the total number of "pages". The problem is that epub.js requires an .epub file as its source. What I want is the same user interface and functionality for an html page. I have searched and searched for some kind of library that can do this out of the box, but haven't been able to find anything. I realize that I could use a conversion script to convert my html page into an .epub file, and then use epub.js to render that file within the browser, but that seems very round-about and clunky. It would be so much better to mimic or simulate the .epub reader user experience with html as the direct source, rendering/mimicking a client side responsive ebook user experience. Does anyone know if something like this already exists, or how I could go about building it myself? The crucial functionality is the dynamic/responsive text-reflow. When the viewport dimensions are reduced, the text/content needs to reflow to the next "page" to avoid any need for vertical scrolling. I don't know how to do this efficiently. If I were to code it myself, I might use something like the jQuery Columnize plugin, setting all columns to `width: 100vw; height: 100vh`, so that each column is like a "page", and then figuring out how to create a swipe UI between those "pages". Any help is much appreciated!<issue_comment>username_1: Take a look at [this](https://github.com/alvarotrigo/fullpage.js) repository on GitHub. Otherwise, you can create a one-page website with many sections, each one as high as the viewport, by using only [CSS](https://medium.com/@ckor/make-full-screen-sections-with-1-line-of-css-b82227c75cbd) ([demo](https://codepen.io/ckor/pen/lBnxh)): ``` .section { height: 100vh; } ``` or by using [JavaScript](https://stackoverflow.com/questions/1248081/get-the-browser-viewport-dimensions-with-javascript), adding an anchor to each section to navigate between them, and applying a responsive unit (my [demo](https://codepen.io/ChemBioScripting/details/KBBMLM/)) for the text of each section, to [adapt](https://css-tricks.com/snippets/css/fluid-typography/) it on resize... 
Something like this: ```js var curr_el_index = 0; var els_length = $(".container").length; $(".next_section").on("click", function(e) { curr_el_index++; if (curr_el_index >= els_length) { curr_el_index = 0; } $("html, body").animate({ scrollTop: $(".container").eq(curr_el_index).offset().top }, 300); return false; }); $(".previous_section").on("click", function(e) { curr_el_index--; if (curr_el_index < 0) { curr_el_index = els_length - 1; } $("html, body").animate({ scrollTop: $(".container").eq(curr_el_index).offset().top }, 300); return false; }); ``` ```css * { border: 0; margin: 0; padding: 0; box-sizing: border-box; -moz-box-sizing: border-box; -webkit-box-sizing: border-box; } body { background-color: #1a1a1a; } section { height: 100vh; background-color: #eee; border: 2px solid red; font-size: 6vw; } ``` ```html Section 1 [Previous](#) [Next](#) Section 2 [Previous](#) [Next](#) Section 3 [Previous](#) [Next](#) Section 4 [Previous](#) [Next](#) Section 5 [Previous](#) [Next](#) ``` **EDIT #1** An idea of algorithm, that come from a [my codepen](https://codepen.io/ChemBioScripting/details/QyBoQV), that uses the same *jQuery plugin*: 1. Create your reader layout, copying the whole text in it 2. Use [this](https://github.com/customd/jquery-visible) jQuery plugin to check the text inside the viewport ([demo](http://opensource.teamdf.com/visible/examples/demo-basic.html)) 3. Count the number of characters/WORDS with "*`Onscreen`*" label in the viewport (a [references](https://stackoverflow.com/questions/18144868/regex-match-count-of-characters-that-are-separated-by-non-matching-characters)) 4. Split the whole text in a `list` containing as many characters/WORDS as there are in the "*`Onscreen`*" label 5. Create a `section` for each element of the obtained `list`, filling each `section` with the relative text; the number of elements of the `list` gives you the number of pages (`sections`) of the whole text. You may navigate between `sections` like above 6. On `resize` event, redo [2-5] algorithm steps Cheers Upvotes: 2 <issue_comment>username_2: You can try CSS scroll snap points on text with columns <https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Scroll_Snap_Points> Somehow make the columns as wide as the viewport, and allow horizontal snapped scrolling. **Update** I mean to try to do the text flowing entirely using css. Pen: <https://codepen.io/ericc3141/pen/RYZEpr> ``` body { scroll-snap-type: mandatory; scroll-snap-points-x: repeat(100%); } #columns-test { height: 80vh; columns: 90vw auto; } ``` Upvotes: -1 <issue_comment>username_3: The idea is to have a div that will contain the whole text (let's call this div `#epub_container`). Then, you will have a div with the same size of the page viewport (let's call it `#displayer`) and it will contain `#epub_container`. `#displayer` will have css `overflow:hidden`. So when the site loads, it will only show the first page, because the rest of the `#epub_container` will be hidden. Then you need a page navigator to increment/decrement the page number. When the page number changes, we will move the top offset of the `#epub_container` based on that. 
This is the jQuery function:

```
function move_to_page() {
    var height = window.innerHeight;
    var width = window.innerWidth;

    var $displayer = $('#displayer');
    var offset = $displayer.offset();
    $displayer.height(height - offset.top - 5);

    var $epub = $('#epub_container');
    var offset_top = offset.top - $displayer.height() * m_page;
    $epub.offset({top: offset_top, left: offset.left});
}
```

[JSFiddle](https://jsfiddle.net/zy4tae68/) EDIT: call `move_to_page()` after the text reflow in order to recompute the pages. [![Image](https://i.stack.imgur.com/ZqNaS.png)](https://i.stack.imgur.com/ZqNaS.png) Upvotes: 1 <issue_comment>username_4: All you need is to insert a page break at the right places when you load the page. You can take the cue from here: [Dynamic Page-Break - Jquery](https://stackoverflow.com/questions/21241240/dynamic-page-break-jquery) Here you can set the page height to your viewport height. You can handle the rest with javascript and css. Upvotes: -1 <issue_comment>username_5: This becomes very difficult if the html page is complex, eg with precisely positioned elements or images. However if (as in the epub.js example) the content consists only of headings and paragraphs, it is achievable. The basic idea is to progressively add content until just before the page overflows. By keeping track of where we start and stop adding content, clicking to the next page is a case of changing the page start to the previous page end (or vice versa if you're going back).

Process for reshaping content into pages
----------------------------------------

Let's assume you have all your content in one long string. Begin by splitting all the content into an array of words and tags. It's not as easy as splitting by whitespace, as whitespace between `<` and `>` should be ignored (you want to keep classnames etc within each tag). Also tags should be separated as well, even if there is no whitespace between the tag and a word. Next you need a function that checks if an element's contents overflow the element. [This question](https://stackoverflow.com/questions/143815/determine-if-an-html-elements-content-overflows) has a copy-paste solution. You need two variables, `pageStart` and `pageEnd`, to keep track of which indexes in the array are the beginning and end of the current page. Beginning at the index in `pageStart` you add elements from the array as content to the page, checking after each add whether or not the contents overflow. When they do overflow you take the index you're up to, minus 1, as the index for `pageEnd`.

Keeping tags across page breaks
-------------------------------

Now if all's ticketyboo then this should fill the page pretty well. When you want to go to the next page set your new `pageStart` as `pageEnd + 1` and repeat the process. However there are some issues that you may want to fix. Firstly, what happens if the page overflows in the middle of a paragraph? Strictly speaking the closing tag, `</p>`, is not required in HTML, so we don't need to worry about it. But what about the start of the next page? It will be missing an opening tag and that is a major problem. So we have to make sure we check if the page's content begins with a tag, and if it doesn't then we get the closest opening tag prior to the current `pageStart` (just step back along the array from `pageStart`) and add it in before the rest of the content. Secondly, as shown in the example, if a paragraph continues onto the next page, the last line of the current page is still justified.
You need to check if `pageEnd` is in the middle of a paragraph and if so add `style="text-align-last:justify;"` to the opening tag of that paragraph.

Example implementation
----------------------

A pen showing all this in action is at <https://codepen.io/anon/pen/ZMJMZZ> The HTML page contains all content in one long element. The content is taken directly from the container `#page` and reformed into pages, depending on the size of `#page`. I haven't implemented justifying the last line if a page break occurs within a paragraph. Resize the `#page` element in the css and see how the content resizes itself - note that since the page size is fixed you'll have to click forward and back to trigger a recalculation. Once you bind the page size to the window size, recalculating pages on the fly simply involves adding a resize event listener to the window that calls `fillPage`. No doubt there are numerous bugs, indeed it will sometimes display things incorrectly (eg skipping or repeating words at the beginning or end of a page), but this should give you an idea of where to start. Upvotes: 4 [selected_answer]<issue_comment>username_6: I created a plugin that handles this perfectly. It has features like dark mode, font changing, line height adjustment, chapter selection in a side nav menu, and saving/restoring the scrolling/reading position. You can find it for free on GitHub at <https://github.com/E-TechDev/Html-Book-Reader> **Screenshots** [Light Mode](https://i.stack.imgur.com/LmkLy.jpg) [Dark Mode](https://i.stack.imgur.com/spCbS.jpg) [Side Nav Menu](https://i.stack.imgur.com/k7Nl2.jpg) [Change Font](https://i.stack.imgur.com/SvyKE.jpg) [Adjust Paragraph](https://i.stack.imgur.com/i2k81.jpg) Upvotes: 0
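The fill loop described in the selected answer is compact enough to sketch abstractly; here it is in Python, with the DOM overflow test abstracted into a predicate supplied by the caller (all names are illustrative):

```python
def fill_page(tokens, page_start, overflows):
    """Add tokens to a page until adding one more would overflow.

    tokens     -- the pre-split array of words and tags
    page_start -- index of the first token on this page
    overflows  -- callable standing in for the DOM overflow check
    """
    content = []
    i = page_start
    while i < len(tokens):
        content.append(tokens[i])
        if overflows(content):
            content.pop()   # the last token pushed the page over
            break
        i += 1
    page_end = i - 1        # the next page starts at page_end + 1
    return content, page_end
```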
2018/03/16
217
765
<issue_start>username_0: I'm trying to inspect the Eloquent queries to check for n+1 problems. I'm using `Event::listen('illuminate.query', function($sql){ var_dump($sql); });` at the top of my web.php routes file, but no query appears. It seems that is because of the version of Laravel, which is Laravel 5. Do you know how to properly check the Eloquent queries in this version?<issue_comment>username_1: Add this to `AppServiceProvider::boot()` ([documentation](https://laravel.com/docs/5.5/database#listening-for-query-events)):

```
DB::listen(function ($query) {
    var_dump($query);
});
```

Upvotes: 2 [selected_answer]<issue_comment>username_2: Try this library, dude: <https://github.com/barryvdh/laravel-debugbar>; you will see all of your queries. Upvotes: 0
2018/03/16
1,956
7,202
<issue_start>username_0: I have code like the following:

```
def render():
    loop = asyncio.get_event_loop()

    async def test():
        await asyncio.sleep(2)
        print("hi")
        return 200

    if loop.is_running():
        result = asyncio.ensure_future(test())
    else:
        result = loop.run_until_complete(test())
```

When the `loop` is not running it is quite easy: just use `loop.run_until_complete` and it returns the coro's result. But if the loop is already running (my blocking code runs in an app which is already running the loop) I cannot use `loop.run_until_complete`, since it will raise an exception; when I call `asyncio.ensure_future` the task gets scheduled and run, but I want to wait there for the result. Does anybody know how to do this? The docs are not very clear on how to do this. I tried passing a `concurrent.futures.Future`, calling `set_result` inside the coro and then calling `Future.result()` in my blocking code, but it doesn't work: it blocks there and does not let anything else run. Any help would be appreciated.<issue_comment>username_1: To implement `runner` with the proposed design, you would need a way to single-step the event loop from a callback running inside it. Asyncio [explicitly forbids](https://bugs.python.org/msg225883) recursive event loops, so this approach is a dead end. Given that constraint, you have two options:

1. make `render()` itself a coroutine;
2. execute `render()` (and its callers) in a thread different than the thread that runs the asyncio event loop.

Assuming #1 is out of the question, you can implement the #2 variant of `render()` like this:

```
def render():
    loop = _event_loop  # can't call get_event_loop()

    async def test():
        await asyncio.sleep(2)
        print("hi")
        return 200

    future = asyncio.run_coroutine_threadsafe(test(), loop)
    result = future.result()
```

Note that you cannot use `asyncio.get_event_loop()` in `render` because the event loop is not (and should not be) set for that thread. Instead, the code that spawns the runner thread must call `asyncio.get_event_loop()` and send it to the thread, or just leave it in a global variable or a shared structure. Upvotes: 5 [selected_answer]<issue_comment>username_2: **Waiting Synchronously for an Asynchronous Coroutine** If an asyncio event loop is already running by calling `loop.run_forever`, it will *block* the executing thread until `loop.stop` is called [see the [docs](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.run_forever)]. Therefore, the only way for a synchronous wait is to run the event loop on a dedicated thread, schedule the *asynchronous* function on the loop and wait for it *synchronously* from *another* thread. For this I have composed my own minimal solution following the [answer](https://stackoverflow.com/a/49333864) by username_1. I have also added the parts for cleaning up the loop when all work is finished [see [`loop.close`](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.close)]. The `main` function in the code below runs the event loop on a dedicated thread, schedules several tasks on the event loop, plus the task whose result is to be awaited *synchronously*. The synchronous wait will block until the desired result is ready. Finally, the loop is closed and cleaned up gracefully along with its thread. The dedicated thread and the functions `stop_loop`, `run_forever_safe`, and `await_sync` can be encapsulated in a module or a class.
For thread-safety considerations, see section "[Concurrency and Multithreading](https://docs.python.org/3/library/asyncio-dev.html#asyncio-multithreading)" in the asyncio docs.

```
import asyncio
import threading

#----------------------------------------
def stop_loop(loop):
    ''' stops an event loop '''
    loop.stop()
    print(".: LOOP STOPPED:", loop.is_running())

def run_forever_safe(loop):
    ''' run a loop for ever and clean up after being stopped '''
    loop.run_forever()
    # NOTE: loop.run_forever returns after calling loop.stop
    #-- cancel all tasks and close the loop gracefully
    print(".: CLOSING LOOP...")
    loop_tasks_all = asyncio.Task.all_tasks(loop=loop)
    for task in loop_tasks_all:
        task.cancel()
    # NOTE: `cancel` does not guarantee that the Task will be cancelled
    for task in loop_tasks_all:
        if not (task.done() or task.cancelled()):
            try:
                # wait for task cancellations
                loop.run_until_complete(task)
            except asyncio.CancelledError:
                pass
    #END for
    print(".: ALL TASKS CANCELLED.")
    loop.close()
    print(".: LOOP CLOSED:", loop.is_closed())

def await_sync(task):
    ''' synchronously waits for a task (busy-wait; fine for a demo) '''
    while not task.done():
        pass
    print(".: AWAITED TASK DONE")
    return task.result()

#----------------------------------------
async def asyncTask(loop, k):
    ''' asynchronous task '''
    print("--start async task %s" % k)
    await asyncio.sleep(3, loop=loop)
    print("--end async task %s." % k)
    key = "KEY#%s" % k
    return key

def main():
    loop = asyncio.new_event_loop()  # construct a new event loop
    #-- closures for running and stopping the event-loop
    run_loop_forever = lambda: run_forever_safe(loop)
    close_loop_safe = lambda: loop.call_soon_threadsafe(stop_loop, loop)
    #-- make dedicated thread for running the event loop
    thread = threading.Thread(target=run_loop_forever)
    #-- add some tasks along with my particular task
    myTask = asyncio.run_coroutine_threadsafe(asyncTask(loop, 100200300), loop=loop)
    otherTasks = [asyncio.run_coroutine_threadsafe(asyncTask(loop, i), loop=loop)
                  for i in range(1, 10)]
    #-- begin the thread to run the event-loop
    print(".: EVENT-LOOP THREAD START")
    thread.start()
    #-- _synchronously_ wait for the result of my task
    result = await_sync(myTask)  # blocks until task is done
    print("* final result of my task:", result)
    #... do lots of work ...
    print("*** ALL WORK DONE ***")
    #========================================
    # close the loop gracefully when everything is finished
    close_loop_safe()
    thread.join()

#----------------------------------------
main()
```

Upvotes: 3 <issue_comment>username_3: Here is my case: my whole program is async, but it calls a sync lib, which then calls back into my async func. Following the answer by username_1:

```py
import asyncio

async def asyncTask(k):
    ''' asynchronous task '''
    print("--start async task %s" % k)
    # await asyncio.sleep(3, loop=loop)
    await asyncio.sleep(3)
    print("--end async task %s." % k)
    key = "KEY#%s" % k
    return key

def my_callback():
    print("here i want to call my async func!")
    future = asyncio.run_coroutine_threadsafe(asyncTask(1), LOOP)
    return future.result()

def sync_third_lib(cb):
    print("here will call back to your code...")
    cb()

async def main():
    print("main start...")
    print("call sync third lib ...")
    await asyncio.to_thread(sync_third_lib, my_callback)
    # await loop.run_in_executor(None, func=sync_third_lib)
    print("another work...keep async...")
    await asyncio.sleep(2)
    print("done!")

LOOP = asyncio.get_event_loop()
LOOP.run_until_complete(main())
```

Upvotes: 2
2018/03/16
818
2,318
<issue_start>username_0: I have a navigator structure like so: `stack drawer stack tab` My hierarchy from there is:

```
view
  view
    flatlist
```

I'm trying to get my FlatList to scroll downward. You can see the finger animation, but the list isn't moving. You can see it in this gif: [Screen Recording 2018-03-16 at 11.30 p.m..gif](https://d3vv6lp55qjaqc.cloudfront.net/items/2l443Q32171S0e1R3x1g/Screen%20Recording%202018-03-16%20at%2011.30%20p.m..gif?X-CloudApp-Visitor-Id=2852073&v=6217454a) Here's my code:

```
it('should have infinite scrolling', async () => {
  await expect(element(by.id('NewsFeed.Scroller'))).toBeVisible();
  await expect(element(by.id('NewsFeedScreen.ArticleListing-0'))).toExist();
  await expect(element(by.id('NewsFeedScreen.ArticleListing-10'))).toNotExist();

  await element(by.id('NewsFeed.Scroller')).scroll(10000, 'down');

  await expect(element(by.id('NewsFeedScreen.ArticleListing-10'))).toExist();
});
```

I believe the issue is that the scroll gesture begins at the bottom of my screen. When I attempt to start a scroll from there myself it does not work either. I'm not seeing anything in the API to allow me to put an offset on where that gesture begins. Looking at that element in the inspector reveals that it's not in the area where Detox begins its gesture: <https://d3vv6lp55qjaqc.cloudfront.net/items/323C3D3U3y1Y2Z1B2L2J/Screen%20Shot%202018-03-16%20at%2023.47.48.png?X-CloudApp-Visitor-Id=2852073&v=31521c3c><issue_comment>username_1: I've found a solution which is good enough while we wait for <https://github.com/wix/detox/issues/589> to be resolved: `await element(by.id('NewsFeedScreen.ArticleListing-0')).swipe('up', 'fast', 0.9);` <https://github.com/wix/detox/blob/master/docs/APIRef.ActionsOnElement.md#swipedirection-speed-percentage> This results in the behaviour I'm looking for: scrolling down in my list. Upvotes: 4 [selected_answer]<issue_comment>username_2: If anyone faces this issue now, try using the `startPositionX` or `startPositionY` parameters of the `.scroll()` method, e.g.: `await element(by.id('scrollView')).scroll(200, 'down', NaN, 0.5)` Worked like a charm for me when I faced the same problem. <https://github.com/wix/Detox/blob/master/docs/APIRef.ActionsOnElement.md#scrolloffset-direction-startpositionx-startpositiony> Upvotes: 1
2018/03/16
1,549
6,353
<issue_start>username_0: **tl;dr** How can I exclude a class from being included in the jar file generated by `bootJar`, depending on a Spring profile? --- *I'm going to intentionally be vague, but will try to provide as much info as possible.* I have a C# .NET web API (JSON over HTTP/1.1). This API receives an encrypted payload from a third-party. This third-party has a library that is written for Java, but not C#. Due to time constraints and other factors, we have decided that it would be quicker to stand up a Java API with Spring Boot, and make calls to that API, rather than implementing their library ourselves. The flow is

1. Send encrypted payload to C# API
2. (C#) Send encrypted payload to Java API
3. (Java) Decrypt encrypted payload
4. (Java) Do data transformation/verification using third-party library
5. (Java) Re-encrypt payload
6. (Java) Return encrypted payload to C# API
7. (C#) Send to a different third-party

We have a suite of automated integration tests that are currently written in C#. We need to make sure that, given a valid payload, when it's sent to the C# API, the payload gets forwarded to the Java API, gets decrypted properly, gets transformed properly, and gets re-encrypted properly for the other third-party to handle. In order to facilitate this, I have created a new controller on the Java API that takes in the plaintext payload, encrypts it (just like the third-party would) and returns it to the caller. The caller can then send that encrypted payload to a different controller on the Java API to do the decryption+validation, and send the re-encrypted payload back to the caller. The Java API trusts these automation-created payloads. The Java API generates a new key pair at runtime, and when it receives the plaintext payload, it signs it with the runtime-generated private key, and trusts messages that verify against the runtime-generated public key. The problem I have now is that **I don't want this functionality to be available in production.** The contents of these payloads are too sensitive for bad actors to be able to manipulate. I have solved part of the problem by adding the `@Profile("test")` annotation to the controller. This allows me to configure it at runtime like

```
java -jar sensitive-app.jar --spring.profiles.active=production
```

That way, at runtime, Spring prevents the controller from working:

```
{
  "timestamp": "2018-03-16T23:20:00.781+0000",
  "status": 404,
  "error": "Not Found",
  "message": "No message available",
  "path": "/dangerous-endpoint"
}
```

However, I understand that nothing is unhackable. As much as possible, I'd like to ensure that this can't be hit by any bad actors that get access to the box; or put differently, even if the attacker gets access to the box and starts the application with

```
java -jar sensitive-app.jar --spring.profiles.active=test
```

the controller still won't be functional. I am building this jar using the spring-boot Gradle plugin. The only way I could think of to do this is preventing the controller from being included in the jar in the first place. I tried adding this to my build script

```
bootJar {
    exclude('sensitive/dangerous/**')
}
```

but I see the class file still included in the jar. Then I wondered if there was some way to have the Spring plugin intelligently include or exclude files based on a property, like

```
gradlew clean build bootJar -Dspring.build.profile=production
```

but couldn't find anything. I also tried to exclude the file in the source set

```
sourceSets {
    main {
        java {
            if (project.environment == 'prod') {
                exclude '**/dangerous/**'
            }
        }
    }
}
```

but the class file still ended up in the jar file (under the `BOOT-INF/classes` folder). **Note:** This controller needs to be available in all non-production environments, so preventing compilation of this file outright is not an option. How can I exclude a class from being included in the jar file generated by `bootJar`; or is there a way that I can tell Gradle to build the files that would be included in a Spring profile?<issue_comment>username_1: I would take a different tack. I would write the dangerous controller so that it looks for something very specific in the JVM environment (e.g. a custom system property or environment variable).

* Have the controller test the variable and throw an exception if it detects that it is being used in production. Fail loudly, and early! Possibly put this into a static initializer so that the failure occurs as soon as the controller is loaded / wired.
* Arrange that the property / variable / whatever is set in a way that cannot be accidentally copied to production by your chosen build / deploy methodology. (There are various ways to do this ... but you have been intentionally vague about the context, which makes it hard to suggest an appropriate one.)

This approach has the advantage that it should be immune from someone interfering with the build script. But it would be advisable to figure out how to exclude the controller via the build script **as well**. (Belt and braces ...)
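A minimal sketch of that belt-and-braces check, assuming a made-up marker property `app.dangerous.enabled` (the class and endpoint names here are illustrative, not from the question):

```java
import org.springframework.context.annotation.Profile;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
@Profile("test") // keep the existing profile guard as the first line of defence
public class DangerousController {

    static {
        // Runs the moment the class is initialized (i.e. when Spring wires the
        // bean). If the marker property is absent, startup fails loudly instead
        // of silently exposing the endpoint.
        if (!"true".equals(System.getProperty("app.dangerous.enabled"))) {
            throw new IllegalStateException(
                    "DangerousController loaded without -Dapp.dangerous.enabled=true; "
                            + "refusing to start.");
        }
    }

    @PostMapping("/dangerous-endpoint")
    public String encryptForTests(@RequestBody String plaintextPayload) {
        // ... sign/encrypt with the runtime-generated key pair as before ...
        return plaintextPayload;
    }
}
```

Test boxes would then be started with `-Dapp.dangerous.enabled=true`, and the production deploy process would simply never set that flag, so even `--spring.profiles.active=test` on a production box fails at startup.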
Upvotes: 1 <issue_comment>username_2: You can do this with gradle sourceSets.

### Add this to your build.gradle:

```
def env = project.hasProperty('env') ? project.env : 'development'

sourceSets {
    main {
        java {
            srcDir 'src'
            if (env == 'production') {
                println "SAFE source set is being used for project compile (env = $env)."
                exclude '**/dangerous/**'
            } else {
                println "DANGEROUS source set is being used for project compile (env = $env)!"
            }
        }
    }
}
```

### You can pass in a project property with "-P". To produce a production jar without the dangerous Java code, call:

```
gradle clean build bootJar -Penv=production
```

### The default build will include the dangerous Java code (for all non-production environments).

### Note

If you are using an IDE, it may not respect your declared source set. You will have to use the Gradle IDE (Eclipse, IDEA) plugins and add the same logic.

### CI

Stuff like this can usually be passed in via your CI job when deploying to various environments. Additionally, you could load your application.properties file into Gradle or use a system property to determine the environment. Upvotes: 0
2018/03/16
850
3,014
<issue_start>username_0: I upgraded Laravel Spark to version six and connected to the GitHub repo on the site. Everything seemed to go OK on the npm install command, but when I ran npm run dev, I got a slew of errors that basically told me that things related to Spark were not found. Seems to be related to the new Mix stuff? Do I need a new GitHub token?

```
These dependencies were not found:

* auth/register-braintree in ./resources/assets/js/spark-components/auth/register-braintree.js
* auth/register-stripe in ./resources/assets/js/spark-components/auth/register-stripe.js
* kiosk/add-discount in ./resources/assets/js/spark-components/kiosk/add-discount.js
....
```

It suggested that I install these dependencies, but running those commands resulted in an error that looks like I am being denied access to the GitHub repo.

```
Error while executing:
npm ERR! /usr/bin/git ls-remote -h -t ssh://git@github.com/auth/register-braintree.git
npm ERR!
npm ERR! Warning: Permanently added the RSA host key for IP address '192.168.127.12' to the list of known hosts.
npm ERR! Permission denied (publickey).
npm ERR! fatal: Could not read from remote repository.
npm ERR!
npm ERR! Please make sure you have the correct access rights
npm ERR! and the repository exists.
```

I bet that there is a connection on GitHub somewhere that I am missing. I connected to the repo on the site ...<issue_comment>username_1:

> Please make sure you have the correct access rights
> and the repository exists.

So either [github.com/auth/register-braintree](https://github.com/auth/register-braintree) is a private repo, and you need to have your account added as a collaborator even just for accessing/cloning said repo, or the URL is not the correct one at all. A public repo can be accessed/cloned over https without authentication (so no token at all). Check if this is one of the [repos from the braintree GitHub organization](https://github.com/braintree?utf8=%E2%9C%93&q=&type=&language=). Check also if [spatie/laravel-mix-purgecss issue 20](https://github.com/spatie/laravel-mix-purgecss/issues/20) applies here:

> I found the problem:
> My `package.json` file was referring to "`laravel-mix": "1.*`".
> I changed to "`laravel-mix": "2.*`" and all ran fine.
> I should have read the installation guide more closely.

Upvotes: 1 <issue_comment>username_2: There were several problems that appeared to combine to cause this error. The first was that version 6 of Laravel Spark moved to a new naming structure (spark-aurelius), so the base path in the webpack config needed to be changed from ...

```
path.resolve(__dirname, 'vendor/laravel/spark/resources/assets/js'),
```

... to ...

```
path.resolve(__dirname, 'vendor/laravel/spark-aurelius/resources/assets/js'),
```

The second problem was that the file-loader package seemed to be needed. I am not totally sure that it was necessary for this particular fix, but you should be aware that it may be required. Upvotes: 2