Dataset columns: date (string, length 10), nb_tokens (int64, 60 to 629k), text_size (int64, 234 to 1.02M), content (string, 234 to 1.02M characters)
2018/03/16
2,271
7,917
<issue_start>username_0: Is there any way to detect IE browser with React and either redirect to a page or give any helpful message. I found something in JavaScript, but not sure how would I use it with React+TypeScript. `var isEdge = !isIE && !!window.StyleMedia;`<issue_comment>username_1: This is the service I always use when doing JS/Browser based browser-detection: <http://is.js.org/> ``` if (is.ie() || is.edge()) { window.location.href = 'http://example.com'; } ``` Upvotes: 3 <issue_comment>username_2: You are on the right track you can use these to conditionally render jsx or help with routing... I have used the following with great success. Originally from - [How to detect Safari, Chrome, IE, Firefox and Opera browser?](https://stackoverflow.com/questions/9847580/how-to-detect-safari-chrome-ie-firefox-and-opera-browser) ``` // Opera 8.0+ const isOpera = (!!window.opr && !!opr.addons) || !!window.opera || navigator.userAgent.indexOf(' OPR/') >= 0; // Firefox 1.0+ const isFirefox = typeof InstallTrigger !== 'undefined'; // Safari 3.0+ "[object HTMLElementConstructor]" const isSafari = /constructor/i.test(window.HTMLElement) || (function (p) { return p.toString() === "[object SafariRemoteNotification]"; })(!window['safari'] || (typeof safari !== 'undefined' && safari.pushNotification)); // Internet Explorer 6-11 const isIE = /*@cc_on!@*/false || !!document.documentMode; // Edge 20+ const isEdge = !isIE && !!window.StyleMedia; // Chrome 1 - 71 const isChrome = !!window.chrome && (!!window.chrome.webstore || !!window.chrome.runtime); // Blink engine detection const isBlink = (isChrome || isOpera) && !!window.CSS; ``` Please be aware they each stand a chance to deprecated due to browser changes. I use them in React like this: ``` content(props){ if(!isChrome){ return ( ) } else { return ( ) } } ``` Then by calling {this.Content()} in my main component to render the different browser specific elements. Pseudo code might look something like this... (untested): ``` import React from 'react'; const isChrome = !!window.chrome && (!!window.chrome.webstore || !!window.chrome.runtime); export default class Test extends React.Component { content(){ if(isChrome){ return ( Chrome ) } else { return ( Not Chrome ) } } render() { return ( Content to be seen on all browsers {this.content()} ) } } ``` Upvotes: 7 [selected_answer]<issue_comment>username_3: Try: ```js const isEdge = window.navigator.userAgent.indexOf('Edge') != -1 const isIE = window.navigator.userAgent.indexOf('Trident') != -1 && !isEdge ``` etc. Each browser has a distinct user agent you can check. These can be faked by the client of course, but in my opinion, are a more reliable long term solution. Upvotes: 1 <issue_comment>username_4: You can write test for IE like this. ``` // Internet Explorer 6-11 const isIE = document.documentMode; if (isIE){ window.alert( "Your MESSAGE here." 
) } ``` Upvotes: 1 <issue_comment>username_5: This is all information you can get from your the browser of you client (using react): ``` let latitude let longitude const location = window.navigator && window.navigator.geolocation if (location) { location.getCurrentPosition(position => { latitude = position.coords.latitude longitude = position.coords.longitude }) } var info = { timeOpened: new Date(), timezone: new Date().getTimezoneOffset() / 60, pageon: window.location.pathname, referrer: document.referrer, previousSites: window.history.length, browserName: window.navigator.appName, browserEngine: window.navigator.product, browserVersion1a: window.navigator.appVersion, browserVersion1b: navigator.userAgent, browserLanguage: navigator.language, browserOnline: navigator.onLine, browserPlatform: navigator.platform, javaEnabled: navigator.javaEnabled(), dataCookiesEnabled: navigator.cookieEnabled, dataCookies1: document.cookie, dataCookies2: decodeURIComponent(document.cookie.split(';')), dataStorage: localStorage, sizeScreenW: window.screen.width, sizeScreenH: window.screen.height, sizeDocW: window.document.width, sizeDocH: window.document.height, sizeInW: window.innerWidth, sizeInH: window.innerHeight, sizeAvailW: window.screen.availWidth, sizeAvailH: window.screen.availHeight, scrColorDepth: window.screen.colorDepth, scrPixelDepth: window.screen.pixelDepth, latitude, longitude } console.log(info) ``` The browser is `browserName` Upvotes: -1 <issue_comment>username_6: This almost broke me, but I found something which seems pretty simple and straight forward, use the vendor name. ie. Google, Apple etc. `navigator.vendor.includes('Apple')` I hope this helps someone out there. Upvotes: 1 <issue_comment>username_7: Not sure why but nobody mentioned this package: [react-device-detect](https://github.com/duskload/react-device-detect) The package have a lot browsers checks, plus versions and some other info related. It's really small and it's updated. You can use: ``` import { isIE } from 'react-device-detect'; isIE // returns true or false ``` react-device-detect it's also very small [bundlephobia link](https://bundlephobia.com/result?p=react-device-detect@1.13.1) Upvotes: 5 <issue_comment>username_8: I was using Gatsby for our React site and build was giving me trouble with the accepted answer, so I ended up using a `useEffect` on load to be able to not render for IE at a minimum: ``` const [isIE, setIsIE] = React.useState(false); React.useEffect(() => { console.log(`UA: ${window.navigator.userAgent}`); var msie = window.navigator.userAgent.indexOf("MSIE "); setIsIE(msie > 0) }, []); if(isIE) { return <> } // In my component render if(isIE) { return <> } ``` Got the idea originally from: <https://medium.com/react-review/how-to-create-a-custom-usedevicedetect-react-hook-f5a1bfe64599> and [Check if user is using IE](https://stackoverflow.com/questions/19999388/check-if-user-is-using-ie) Upvotes: 2 <issue_comment>username_9: You can try this: ```js navigator.browserDetection= (function(){ var ua= navigator.userAgent, tem, M= ua.match(/(opera|chrome|safari|firefox|msie|trident(?=\/))\/?\s*(\d+)/i) || []; if(/trident/i.test(M[1])){ tem= /\brv[ :]+(\d+)/g.exec(ua) || []; return 'IE '+(tem[1] || ''); } if(M[1]=== 'Chrome'){ tem= ua.match(/\b(OPR|Edge)\/(\d+)/); if(tem!= null) return tem.slice(1).join(' ').replace('OPR', 'Opera'); } M= M[2]? 
[M[1], M[2]]: [navigator.appName, navigator.appVersion, '-?']; if((tem= ua.match(/version\/(\d+)/i))!= null) M.splice(1, 1, tem[1]); return M.join(' '); })(); console.log(navigator.browserDetection); // outputs: `Chrome 92` ``` Upvotes: 0 <issue_comment>username_10: There is a new package that takes care of this for React: <https://www.npmjs.com/package/react-browser-navigator> This is how you can use it: ```js // import the module import useNavigator from "react-browser-navigator"; function App() { // importing the property let { userAgent } = useNavigator(); // you can use it within the useEffect hook OR simply print the // string into the return statement useEffect(() => { if (!isNull(userAgent)) { // printing out the entire object console.log("userAgent", userAgent); } }, [userAgent]); return ( userAgent: {userAgent} ); } ``` Essentially, the output will be something like this: ``` Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36 ``` Upvotes: 0
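Every approach in this thread ultimately leans on either a browser-specific global or the user-agent string, both of which shift as browsers update. As a rough sketch only (not taken from any answer above, and assuming a plain browser environment where `window`, `navigator` and `document` exist), the IE and legacy-Edge checks can be collected into one helper so React components only branch on a boolean:

```js
// Hypothetical helper that bundles the UA/documentMode checks discussed above.
// Assumes it runs in a browser (not during server-side rendering).
export function detectBrowser() {
  const ua = window.navigator.userAgent;
  // Legacy Edge puts "Edge/" in its user agent; Chromium-based Edge uses "Edg/".
  const isLegacyEdge = ua.indexOf('Edge/') !== -1;
  // Only IE 6-11 expose document.documentMode; IE 11 also reports "Trident/".
  const isIE = !!document.documentMode || ua.indexOf('Trident/') !== -1;
  return { isIE, isLegacyEdge };
}

// Usage sketch inside a component:
//   const { isIE } = detectBrowser();
//   return isIE ? <UnsupportedBanner /> : <MainApp />;
```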
2018/03/16
483
1,538
<issue_start>username_0: I'm working on a project and I need to select a different .txt every time based on the input. This is what I have: ``` #include #include #include #include #include #include using namespace std; int main() { string hp, att, def, vel, spec; string answer, monster; do { cout << "Which Monster?: "; cin >> monster; cout << endl; ifstream selection; selection.open(monster+".txt"); selection.close(); cout << endl << "Again? "; cin >> answer; } while (answer == "y"); cout << "Hello world!" << endl; return 0; } ``` I have to get the monster string and search for the .txt with the same name. If I type "Troll" it should search for Troll.txt. Is there a way? This is the error I get: F:\GdR\Campagna 1\CalcoloStats\main.cpp|22|error: no matching function for call to 'std::basic_ifstream::open(std::__cxx11::basic_string)'|<issue_comment>username_1: Given that `monster` is a `std::string`, this expression: ``` monster + ".txt" ``` is also a `std::string`. Since C++11, you can use this as an argument to `ifstream`'s `open` function just fine. However, until then, you are stuck with a limitation of `ifstream`, which is that it can only take a C-style string. Fortunately, you can get a C-style string from a `std::string` using the `c_str()` member function. So, either: ``` selection.open((monster + ".txt").c_str()); ``` Or get a modern compiler / switch out of legacy mode. Upvotes: 1 <issue_comment>username_2: Thanks username_1, solved with the C++11 compiler flag Upvotes: -1
2018/03/16
968
4,138
<issue_start>username_0: I've read several dozen posts, many dating back years, and cannot come up with a modern, safe and reliable way to update a special value in several thousand records as a single query. I loop over all the records in the table, determine a DateTime value based on some special logic and then run this simple query to update that value... over 3500 times. That's a lot of trips over the wire. ``` UPDATE ScheduleTickets SET ScheduledStartUTC = @ScheduledStartUTC WHERE ScheduleId = @ScheduleId AND PatchSessionId = @PatchSessionId ``` I've seen comments to not waste memory by saving to and using a DataTable. I've seen solutions that use a StringBuilder to dynamically create an update query but that feels insecure/dirty. Sure, the entire process takes less than a minute but there must be a better way. So, after figuring out the DateTime value, I call... ``` UpdateScheduleTicketStart(ScheduleId, PatchSessionId, scheduledDateTime); ``` Which looks like this... ``` private static void UpdateScheduleTicketStart(long scheduleId, long patchSessionId, DateTime scheduledStartUTC) { using (SqlConnection c = ConnectVRS()) { SqlCommand cmd = new SqlCommand(@" UPDATE ScheduleTickets SET ScheduledStartUTC = @ScheduledStartUTC WHERE ScheduleId = @ScheduleId AND PatchSessionId = @PatchSessionId ", c); cmd.Parameters.Add("@ScheduleId", SqlDbType.BigInt).Value = scheduleId; cmd.Parameters.Add("@PatchSessionId", SqlDbType.BigInt).Value = patchSessionId; cmd.Parameters.Add("@ScheduledStartUTC", SqlDbType.VarChar).Value = scheduledStartUTC; cmd.ExecuteNonQuery(); } } ``` How can I pass all the values to SQL Server in one call or how can I create a single SQL query to do the updates in one fell swoop?<issue_comment>username_1: If the values are in another table, use a `join`: ``` UPDATE st SET ScheduledStartUTC = ot.ScheduledStartUTC FROM ScheduleTickets st JOIN OtherTable ot ON st.ScheduleId = ot.ScheduleId AND st.PatchSessionId = ot.PatchSessionId; ``` You don't specify the special logic but you can probably express it in SQL. Upvotes: 2 <issue_comment>username_2: Many people have suggested using a TableValueParameter, and I agree it would be a good method. 
Here is an example of how you could do that: First, create a TVP and stored proc in SQL Server ``` CREATE TYPE [dbo].[SchdeuleTicketsType] As Table ( ScheduledStartUTC DATETIME NOT NULL , ScheduleId INT NOT NULL , PatchSessionId INT NOT NULL ) GO CREATE PROCEDURE [dbo].[usp_UpdateTickets] ( @ScheduleUpdates As [dbo].[SchdeuleTicketsType] Readonly ) AS Begin UPDATE t1 SET t1.ScheduledStartUTC = t2.ScheduledStartUTC FROM ScheduleTickets AS t1 INNER JOIN @ScheduleUpdates AS t2 ON t1.ScheduleId = t2.ScheduleId AND t1.PatchSessionId = t2.PatchSessionId End ``` Next, modify your code to populate a table and pass that as a parameter to the stored proc: ``` private void Populate() { DataTable dataTable = new DataTable("SchdeuleTicketUpdates"); //we create column names as per the type in DB dataTable.Columns.Add("ScheduledStartUTC", typeof(DateTime)); dataTable.Columns.Add("ScheduleId", typeof(Int32)); dataTable.Columns.Add("PatchSessionId", typeof(Int32)); //write your loop to populate here //call the stored proc using (var conn = new SqlConnection(connString)) { conn.Open(); var command = new SqlCommand("[usp_UpdateTickets]", conn); command.CommandType = CommandType.StoredProcedure; var parameter = new SqlParameter(); //The parameter for the SP must be of SqlDbType.Structured parameter.ParameterName = "@ScheduleUpdates"; parameter.SqlDbType = System.Data.SqlDbType.Structured; parameter.Value = dataTable; command.Parameters.Add(parameter); command.ExecuteNonQuery(); } } ``` Upvotes: 2
2018/03/16
870
3,624
<issue_start>username_0: Understandably, PWAs (Progressive Web Apps) are added to home screen after the user visits the web app in a supported browser and clicks on "Add to Home Screen". This works fine for publicly available PWAs. Thinking of Enterprise Android applications, which needs to be installed to thousands of devices via some app push tools like [Airwatch](https://www.air-watch.com/services/deployment/), its practically not possible to have someone open the browser, put the URL and then add to home screen in all thousands of devices. **Is there any other way to automate this deploy/add icon to home screen of a Progressive Web App, not needing the user to visit the web app in a browser, and clicking on "Add to Home Screen" ?** One option we thought about is wrapping in Cordova, but we're trying to find a solution without such wrapper.<issue_comment>username_1: Chrome for Android generates and signs .apk file on the fly using WebPack, when the user clicks on "Add to Home screen" (from menu or install banner) option and if the site has a valid manifest.json and service worker. **Extracting and distributing APK**: This .apk can be located and exported to desktop using file explorer tools. For some reason, some of default file explorer tools couldn't locate this .apk file. Once exported, this .apk can be used to distribute in controlled environment..like in enterprise devices, where you can enforce the deceive to have Chrome Browser. If this .apk is installed to devices which don't have Chrome browser, user will get a message saying "Chrome" is required to open this app. Once installed, installed PWA apk can be used. **For distributing PWA apps through Play store**, google is streamlining the process. **A google engineers repose on building APKs (March-2018)**, when we reached out to them for our enterprise needs. "Well done extracting the APK and deploying it, it should give a good experience to end users, but I agree it shouldn't be that complicated to deploy web apps on Android. We are currently working on a streamlined web apps feature, with which you wouldn't need to manipulate or build APKs. This feature will be available on managed devices using Play to deliver apps" **Alternate options:** If you think your user base may not have Chrome or don't want to rely on that dependency, wrapping with Cordova kind of hybrid solution is the only way to build your PWA apps for distribution in app stores. With this option, if the "webpack" in the device has the [version 40](https://chromereleases.googleblog.com/2015/03/android-webview-update.html)+, user will get PWA benefits. Otherwise, it will still work as a regular hosted web app. **Update on TWA - [Trusted web activity](https://developers.google.com/web/updates/2019/02/using-twa)** is the official way to pack PWA for Android and its available in Chrome 72 and it also supports private/enterprise web apps as the digital assets validation happens in browser now(it use to happen in cloud, making this solution not possible for private web apps). Upvotes: 5 [selected_answer]<issue_comment>username_2: Trusted Web activities are a new way to integrate your web-app content such as your PWA with your Android app using a similar protocol to Chrome Custom Tabs. [Trusted Web activities](https://developers.google.com/web/updates/2017/10/using-twa) Upvotes: 1 <issue_comment>username_3: You can upload the PWA to Playstore using tools like [PWA2APK](https://appmaker.xyz/pwa-to-apk/). 
You just need to share the Play Store URL with your users, who can then download the PWA like a normal Android app. Upvotes: 2
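The WebAPK / Add to Home Screen path described above only applies when the site already qualifies as a PWA, i.e. it serves a valid manifest.json over HTTPS and registers a service worker. As a minimal sketch of the JavaScript half of that prerequisite (the `/sw.js` path and the manifest contents are assumptions, not something from this thread):

```js
// Register a service worker so the browser can treat the site as installable.
// A manifest.json linked from the page's <head> is also required.
if ('serviceWorker' in navigator) {
  window.addEventListener('load', () => {
    navigator.serviceWorker
      .register('/sw.js') // assumed path to the service worker script
      .then((reg) => console.log('Service worker registered, scope:', reg.scope))
      .catch((err) => console.error('Service worker registration failed:', err));
  });
}
```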
2018/03/16
1,094
4,601
<issue_start>username_0: I did a lot of research now to find out the exact differences and trade-offs between a React Native project, CRNA and an Expo project. My main guidance was [this](https://stackoverflow.com/questions/39170622/what-is-the-difference-between-expo-and-react-native/49324689#49324689) However I still don't understand what (dis-)advantages I have using ExpoKit with native code vs. a normal React Native project with native code, apart from the fact that I can't use Expo APIs in a normal React Native project. I know that when I start a project in Expo I can eject it either as ExpoKit project or as React Native project. In both I can use native code. In ExpoKit I can still use the Expo APIs in a normal React Native project I can't. So my questions: 1. What would be my interest to use a react native project if I can use native code and all Expo APIs in an ExpoKit project? In an ExpoKit project I can still use all Expo APIs and all React Native APIs, right?! 2. Could I use Expo APIs in a React Native project if I install expo with `npm install --save expo`? 3. What is the difference between React Native API and Expo API?<issue_comment>username_1: ExpoKit is kind of a hybrid between "pure JS" Expo apps and "vanilla" React Native. At its core it's still a React Native project, but a few things differ about the build system, developer experience, and features available. Features -------- As of today, most of the APIs in Expo's SDK are not available in a vanilla React Native project, but they are available in ExpoKit. We think this might change in the future, but it'll be a lot of work. Expo's push notification service does not currently work in ExpoKit, nor for vanilla React Native. Build System ------------ Both vanilla RN apps and ExpoKit apps use Xcode and Android Studio to build the native code. iOS ExpoKit apps use CocoaPods to install dependencies, which can add a little bit of complexity to managing the native build. Android ExpoKit apps have additional Gradle configuration to build multiple versions of React Native into the same binary (this is used to enable over-the-air updates of multiple SDK versions of JS), which can sometimes increase the complexity of adding other React Native libraries. The JavaScript for an ExpoKit and React Native project is built by Metro, although in ExpoKit you need to run Metro using Expo's XDE or exp tools so that they can handle extra configuration for the project. This means you run a command like `exp start` rather than `react-native run-android`. Due to the current design of ExpoKit (although this may change in the future), some open source React Native libraries may have compatibility issues with ExpoKit. For example, if a native library expects to be able to request a reference to React Native's OkHttp instance on Android, there may be a type mismatch when running inside ExpoKit due to the namespacing Expo uses to allow multiple versions of React Native to compile. That said, these issues tend to be pretty rare, and we're working on a few different ways for ExpoKit to be more and more compatible with libraries in the ecosystem. This multi-version support also means that ExpoKit binaries tend to be larger than corresponding vanilla React Native binaries, although that may change in the future. Your subquestions ----------------- 1. Some developers prefer to manage how their JS bundles and assets are distributed themselves, or they may need a library which isn't currently compatible with ExpoKit. 
Binary size is another reason why you may prefer vanilla RN. 2. It's not currently possible to use most Expo APIs in a React Native project. Some APIs available in the Expo SDK are bundled from open source projects (such as react-native-maps) and can be used with a vanilla RN project. 3. I'm not sure how to parse this question -- Expo APIs are currently just React Native APIs which know how to talk to each other and make certain assumptions about the environment they're running in. Question from comment 1: You can modify the ExpoKit build all you want, although it may make it slightly harder to upgrade to a newer SDK release depending on how heavily you edit it. Upvotes: 6 [selected_answer]<issue_comment>username_2: Note to anyone who stumbles across this after 2020 ... ExpoKit has been deprecated so this whole question is moot. From Expo's [documentation](https://docs.expo.io/workflow/glossary-of-terms/#expokit): "ExpoKit is deprecated and support for ExpoKit will be removed after SDK 38. We recommend ejecting to the bare workflow instead." Upvotes: 0
2018/03/16
1,024
4,311
<issue_start>username_0: I need to upload a file to Google Cloud Storage and make it public immediately. I use `createWriteStream`. If I try to write a file and after a finish immediately call `makePublic` - I get an error about that no such object. For now, I use setTimeout with `makePublic` callback, but it's a hacky way. the code looks like this: ```js const name = 'folder/file.jpg' const fileRef = bucket.file(name) const save = (stream, name, meta) => new Promise((resolve, reject) => { stream .on('error', (error) => reject(error)}) .on('end', () => resolve(name)) .pipe(fileRef.createWriteStream({ metadata: meta, })) }) save(yourReadStream(), name, yourMetaObj) .then(() => { setTimeout(() => { fileRef.makePublic() }, 1500) }) ``` How to avoid setTimeout and do it in a guaranteed way?
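The race in the snippet above comes from resolving on the read stream's `'end'` event, which fires before the data has actually been flushed to the bucket, so `makePublic()` can run against an object that does not exist yet. A minimal sketch of a timeout-free version, assuming the `@google-cloud/storage` Node.js client (where the write stream emits `'finish'` once the object exists and `makePublic()` returns a promise):

```js
// Sketch only: resolve when the *write* stream finishes, not when the read stream ends.
const save = (stream, name, meta) =>
  new Promise((resolve, reject) => {
    const file = bucket.file(name);
    stream
      .on('error', reject)
      .pipe(file.createWriteStream({ metadata: meta }))
      .on('error', reject)
      .on('finish', () => resolve(file)); // object is now in the bucket
  });

save(yourReadStream(), 'folder/file.jpg', yourMetaObj)
  .then((file) => file.makePublic()); // no setTimeout needed
```

If the installed client version supports it, passing an option such as `predefinedAcl: 'publicRead'` (or `public: true`) to `createWriteStream` would make the object public in the same request, but treat those option names as assumptions to verify against that version's documentation.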
2018/03/16
898
3,951
<issue_start>username_0: Here is my query: ``` select CASE WHEN ATTRIBUTE_SOURCE is NULL THEN ELSE ATTRIBUTE\_SOURCE END AS ATTRIBUTE\_SOURCE, \* from () ``` The calculation is working fine, the problem is the resulting table has two `ATTRIBUTE_SOURCE` columns, one from the original table and another that is added by the CASE statement. Is there a way to *replace* the `ATTRIBUTE_SOURCE` column in the original table entirely with the `ATTRIBUTE_SOURCE` column that's generated by my CASE statement?
2018/03/16
386
1,341
<issue_start>username_0: I am trying to implement a screen with a map, and on top of the map it should have a textbar displaying text, but for some reason my textbar always goes below the Google Map view. I tried to look up related questions but had no success with their answers. The picture is attached below to give some more guidance. Code is below. ``` xml version="1.0" encoding="utf-8"? ``` [Image of desirable output](https://i.stack.imgur.com/6nBuN.png)<issue_comment>username_1: try using this in your textView ``` android:layout_alignParentTop="true" ``` **layout_alignParentTop** > > If true, makes the top edge of this view match the top edge of the > parent. Accommodates top margin. > > > > ``` > May be a boolean value, such as "true" or "false". > > ``` > > You can also try using **android:layout_alignTop**, which makes the top edge of this view match the top edge of the given anchor view ID. Accommodates top margin. Upvotes: 1 [selected_answer]<issue_comment>username_2: Try the next layout. If you want to display something in front, you must put it last in the layout. **EDIT** Try this code instead. I'm using it in a personal project to show buttons in front of the map. But of course I'm using a SupportMapFragment, maybe you can try using it too. ``` ``` Hope it helps! Upvotes: 1
2018/03/16
710
2,680
<issue_start>username_0: I'm using the generic `IdentityUser` class for my user model and with using `string` as primary key everything works fine, but I would like to use `long`as key type. So, I'm using the generic version of IdentityUser. Now I discovered that the UserManager has the following definition: ``` public class UserManager : IDisposable where TUser : class ``` and the following function ``` public virtual Task FindByIdAsync(string userId); ``` It seems that this doesn't fit together because the function requires a string as key/ID. `TUser` must be `TUser`. The question is now if there is any performance loss because of parsing string to long (of course there is) or is it better to get the user object direct from DbContext with a normal database select? Does the UserManager has any benefits so that it would be recommended to use it even with the string key requirement? Thanks!<issue_comment>username_1: From [this](https://learn.microsoft.com/en-us/aspnet/core/security/authentication/identity-primary-key-configuration?tabs=aspnetcore2x) link you can see that it is possible to change type of the key. Initially it is set to string, so that you can use values like int or guid out of the box. Code taken from mentioned link. Firstly change your user to desired key type: ``` public class ApplicationUser : IdentityUser { } ``` Edit Application Role class: ``` public class ApplicationRole : IdentityRole { } ``` In application context class, add your new key: ``` public class ApplicationDbContext : IdentityDbContext ``` Upvotes: 0 <issue_comment>username_2: You can setup your `User` table to use a different type for a primary key by extending the `IdentityUser` class like so: `public class MyUser : IdentityUser` And then injecting `UserManager` into your classes to manage them. <https://learn.microsoft.com/en-us/aspnet/core/security/authentication/identity-primary-key-configuration?tabs=aspnetcore2x> Upvotes: 1 <issue_comment>username_3: `FindByIdAsync` always takes string because it is a common denominator of every class. Instead of polluting with generics all over the place (like in Identity v2) you will have to convert your digits to string and pass it on. On DB side it is [handled by EF](https://github.com/aspnet/Identity/blob/87a956e49415581e3767d8de66a1f5a326f02194/src/EF/UserStore.cs#L234) anyway for you. You can have your own UserManager class that inherits from `UserManager` and have a method override `FindByIdAsync(long id)` that just does `id.ToString()` ``` public virtual Task FindByIdAsync(long userId) { return base.FindByIdAsync(userId.ToString()); } ``` Upvotes: 2 [selected_answer]
2018/03/16
349
1,184
<issue_start>username_0: I tried this code, expecting it to use IPython's `display` function: ``` import pandas as pd data = pd.DataFrame(data=[tweet.text for tweet in tweets], columns=['Tweets']) display(data.head(10)) ``` But I get an error message that says `NameError: name 'display' undefined`. Why? How do I make it so that I can use `display`?<issue_comment>username_1: `display` is a function in the `IPython.display` module that runs the appropriate dunder method to get the appropriate data to ... display. If you really want to run it ``` from IPython.display import display import pandas as pd data = pd.DataFrame(data=[tweet.text for tweet in tweets], columns=['Tweets']) display(data.head(10)) ``` But don't. IPython is already doing that for you. Just do: ``` data.head(10) ``` --- You even might have IPython uninstalled, try: ``` pip install IPython ``` or if running pip3: ``` pip3 install IPython ``` Upvotes: 6 <issue_comment>username_2: To solve the problem on pycaret, you have to open the below file - ``` ..\env\Lib\site-packages\pycaret\datasets.py ``` and add the line of code - ``` from IPython.display import display ``` Upvotes: 3
2018/03/16
921
3,471
<issue_start>username_0: I'm doing Selenium JAVA tests for mobile using the Chrome driver with its emulator. The problem is that I can't use the most advanced mobile devices like iPhone 7,8 etc even-though its in my drop-down list when manually testing in the devtools. This is the driver init. that works perfectly with many mobile devices: ``` if(browser.equalsIgnoreCase("mobileIPhone6")){ Map mobileEmulation = new HashMap(); mobileEmulation.put("deviceName", "iPhone 6"); String exePathChromeDriver = Consts.chromeDriverPath; System.setProperty("webdriver.chrome.driver", exePathChromeDriver); PropertyLoader.loadCapabilities(); ChromeOptions chromeOptions = new ChromeOptions(); chromeOptions.setExperimentalOption("mobileEmulation", mobileEmulation); driver = new ChromeDriver(chromeOptions); ``` BUT, When I change line 3 to "iPhone 7" I'm getting this error: ``` 2018-03-16 21:25:49 INFO LogLog4j:210 - org.openqa.selenium.WebDriverException: unknown error: cannot parse capability: chromeOptions from unknown error: cannot parse mobileEmulation from unknown error: 'iPhone 7' must be a valid device from unknown error: must be a valid device ``` Any idea why? many thanks<issue_comment>username_1: Why not use [Dimension](https://seleniumhq.github.io/selenium/docs/api/java/org/openqa/selenium/Dimension.html) class? Define an enum EmulatedDevices and use following style: ``` driver.manage().window().setSize(new Dimension(EmulatedDevices.IPHONE7.getWidth(),EmulatedDevices.IPHONE7.getHeight())); ``` Upvotes: 0 <issue_comment>username_2: I've found the solution and it is now working properly. I had to set the Device Metrics object first: ``` ...else if(browser.equalsIgnoreCase("iPhone678")){ String exePathChromeDriver = Consts.chromeDriverPath; System.setProperty("webdriver.chrome.driver", exePathChromeDriver); Map deviceMetrics = new HashMap<>(); deviceMetrics.put("width", 375); deviceMetrics.put("height", 667); deviceMetrics.put("pixelRatio", 2.0); Map mobileEmulation = new HashMap<>(); mobileEmulation.put("deviceMetrics", deviceMetrics); mobileEmulation.put("userAgent", "Mozilla/5.0 (iPhone; CPU iPhone OS 11\_0 like Mac OS X) AppleWebKit/604.1.34 (KHTML, like Gecko) Version/11.0 Mobile/15A5341f Safari/604.1"); ChromeOptions chromeOptions = new ChromeOptions(); chromeOptions.setExperimentalOption("mobileEmulation", mobileEmulation); driver = new ChromeDriver(chromeOptions); } ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: This should work: ``` Map mobileEmulation = new HashMap<>(); mobileEmulation.put("deviceName", "iPhone 6"); ChromeOptions chromeOptions = new ChromeOptions(); chromeOptions.setExperimentalOption("mobileEmulation", mobileEmulation); driver = new ChromeDriver(chromeOptions); driver.get("https://www.google.co.uk"); ``` Upvotes: 0 <issue_comment>username_4: You can use all the devices mentioned in the 'Emulated Devices' list within Chrome devtools > Devices. If the desired device is not present within the list you can 'Add custom device' ([screenshot](https://i.stack.imgur.com/Crg4r.png))- providing basic information, and use the `deviceName` within your code. In another way, you can directly provide your desired device's basic information (`width, height, pixelRatio`) within your automation code to use that device for testing. You can get the details code with explanation [here](https://youtu.be/UOwEcGr3P9A). Upvotes: 0
2018/03/16
662
1,746
<issue_start>username_0: Is there a way to do a merge in pandas limiting the columns you want to see? What I have: df1 ``` ID Col1 Col2 Col3 Col4 1 1 1 1 D 2 A C C 4 3 B B B d 4 X 2 3 6 ``` df2 ``` ID ColA ColB ColC ColD 1 1 1 1 D 2 A C X 4 3 B B Y d ``` What I want: df\_final ``` ID ColA ColB ColC ColD 1 NA NA NA NA 2 A C X 4 3 B B Y d 4 NA NA NA NA ``` I want to do a left join on two dataframes (keeping all IDs from df1) but I only want to keep the columns from df2. I also only want values if Col3 from df1 is either C or B. The following works but the resulting df includes all columns from both dfs. I can add a third line to only see the columns I want but this is a simple example. In reality I have much larger datasets and its difficult to manually input all the column names I want to keep. ``` df=pd.merge(df1,df2,how='left',on='ID') df_final=df[df['Col3'].isin['C','B']] ``` Equivalent SQL would be ``` create table df_final as select b.* from df1 a left join df2 b on a.ID=b.ID where a.Col3 in ('C','B') ```<issue_comment>username_1: Mask `df1` with your `isin` condition before the `merge`: ``` df1.where(df1.Col3.isin(['C', 'B']))[['ID']].merge(df2, how='left', on='ID') ``` Or, ``` df1.mask(~df1.Col3.isin(['C', 'B']))[['ID']].merge(df2, how='left', on='ID') ``` ``` ID ColA ColB ColC ColD 0 NaN NaN NaN NaN NaN 1 2 A C X 4 2 3 B B Y d 3 NaN NaN NaN NaN NaN ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: This should do the trick ``` df=pd.merge(df1[df1.Col3.isin(['C','B'])][['ID']], df2, how='left', on='ID') ``` Upvotes: 0
2018/03/16
522
1,361
<issue_start>username_0: I have a small issue. I have a json file which I fetch data from. When I print\_r() the data, I see the field I want. But trying to call them, only 2 on 3 works, one seems to not be fetch-able. Here the code, if someone have an idea about what's wrong: Original JSON : ``` [ { "ņame": "Xcoin", "rate": "100.0000", "status": "online" } ] ``` The JSON with print\_r() ``` Array ( [ņame] => XCoin [rate] => 100.0000 [status] => online ) ``` When I fetch individually each fields: ``` echo $coin['name']." "; echo $coin['rate']." "; echo $coin['status']." "; ``` The result of the previous code: ``` 100.0000 online ``` Like if the name was not there! How's possible? I have others array and name fetch correctly, using same format.<issue_comment>username_1: Mask `df1` with your `isin` condition before the `merge`: ``` df1.where(df1.Col3.isin(['C', 'B']))[['ID']].merge(df2, how='left', on='ID') ``` Or, ``` df1.mask(~df1.Col3.isin(['C', 'B']))[['ID']].merge(df2, how='left', on='ID') ``` ``` ID ColA ColB ColC ColD 0 NaN NaN NaN NaN NaN 1 2 A C X 4 2 3 B B Y d 3 NaN NaN NaN NaN NaN ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: This should do the trick ``` df=pd.merge(df1[df1.Col3.isin(['C','B'])][['ID']], df2, how='left', on='ID') ``` Upvotes: 0
2018/03/16
379
1,037
<issue_start>username_0: I am trying to show a `div` element only whenever user inputs some text in a different field. **Text Field:** **Div:** ``` div class="col-xs-12 text-left" id="continue" style="display: none;"> Continue ``` **JaveScript:** ``` if($('feedUrl').val()){ $('#continue').show(); } ``` The issue I am having is that the Continue div does not show up whenever I write in the `feedUrl` input. Am I missing something?<issue_comment>username_1: Mask `df1` with your `isin` condition before the `merge`: ``` df1.where(df1.Col3.isin(['C', 'B']))[['ID']].merge(df2, how='left', on='ID') ``` Or, ``` df1.mask(~df1.Col3.isin(['C', 'B']))[['ID']].merge(df2, how='left', on='ID') ``` ``` ID ColA ColB ColC ColD 0 NaN NaN NaN NaN NaN 1 2 A C X 4 2 3 B B Y d 3 NaN NaN NaN NaN NaN ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: This should do the trick ``` df=pd.merge(df1[df1.Col3.isin(['C','B'])][['ID']], df2, how='left', on='ID') ``` Upvotes: 0
2018/03/16
625
1,938
<issue_start>username_0: I've hacked my way around this issue, but I'd like to implement a better code. I'm trying to determine the last date where a non-zero financial event occurs. Note that this is a stripped and altered XML for brevity so excuse any formatting oddities. The actual XML I'm using has Change amounts from 1/1/2018 through 12/31/2029 with everything from 4/1/2018 to 12/31/2029 totaling to 0. ``` ``` In the above snippet I'd want the 3/1/2018 date, since the 4/1/2018 date totals to a 0 debit. Using XSLT 2.0, I've used the following code to exclude the date of 4/1/2018, but I haven't figured a way to only return the 3/1/2018 date. Every thing I've tried has returned the 4/1/2018 as the last date in the series. ``` ``` As I said, I did some hacking (hidden cells in Excel), but I really want to handle it all in code as it appears this date might become required as part of later filtering within code.<issue_comment>username_1: I think you can use the sum of the current-group for the check and then you need to output those elements in a variable and then take the first item in that variable value: ``` {Date/@date} ``` <https://xsltfiddle.liberty-development.net/pPgCcoC> The `{Date/@date}` is XSLT 3, in XSLT 2 you need . Upvotes: 3 [selected_answer]<issue_comment>username_2: Here's another option that is similar to Martin's. I like Martin's better (+1), but since I'd already written it I'll go ahead and add it. **XSLT 2.0** ``` ``` **Output** ``` 3/1/2018 ``` Working fiddle: <https://xsltfiddle.liberty-development.net/3Nqn5Yn/1> Upvotes: 0 <issue_comment>username_3: I would go for a two-phase approach: group the data first, then do what you need to with the groups. In the first phase, you transform ``` ``` to ``` ``` (which is a trivial use of xsl:for-each-group) In the second phase, you apply filtering, for example `Date[sum(Change/@amount)!=0)][last()]` Upvotes: 0
2018/03/16
552
1,743
<issue_start>username_0: I need to order by date edited or date posted. For example if an item date posted is 2018-03-16 20:40:00 and another item date edited is 2018-03-16 21:40:00 I want to show first the item that is closest time to current in date posted or date edited. I tried this: ``` "SELECT * FROM table ORDER BY date_posted DESC, date_edited DESC LIMIT 20" ``` However, if date\_edited is higher time then in date\_posted it shows that item (date\_edited) and the rest of them after it not ordering by the time. I also tried: ``` "ORDER BY date_posted DESC OR date_edited DESC..." ``` But that did not work at all. So basically, I want to order it by the time from date\_posted or date\_edited to simplify it a little.<issue_comment>username_1: I think you can use the sum of the current-group for the check and then you need to output those elements in a variable and then take the first item in that variable value: ``` {Date/@date} ``` <https://xsltfiddle.liberty-development.net/pPgCcoC> The `{Date/@date}` is XSLT 3, in XSLT 2 you need . Upvotes: 3 [selected_answer]<issue_comment>username_2: Here's another option that is similar to Martin's. I like Martin's better (+1), but since I'd already written it I'll go ahead and add it. **XSLT 2.0** ``` ``` **Output** ``` 3/1/2018 ``` Working fiddle: <https://xsltfiddle.liberty-development.net/3Nqn5Yn/1> Upvotes: 0 <issue_comment>username_3: I would go for a two-phase approach: group the data first, then do what you need to with the groups. In the first phase, you transform ``` ``` to ``` ``` (which is a trivial use of xsl:for-each-group) In the second phase, you apply filtering, for example `Date[sum(Change/@amount)!=0)][last()]` Upvotes: 0
2018/03/16
398
1,339
<issue_start>username_0: I have tried expressions with suppressMessages(expr), suppressWarnings(expr), but they keep outputting messages. eg: ``` suppressWarnings(ksvm(y~., data=data, type='C-svc', cross=5, kernel=kernel)) ``` keeps generating this message. > > Setting default kernel parameters > > > How do I suppress messages from libraries? Is there a way to do this globally? Have tried: ``` {r messages=FALSE, warnings=FALSE} ```<issue_comment>username_1: If it does not say that it is a warning, you should use `suppressMessages`. Try putting the function call in braces: ``` suppressMessages({ksvm(y~., data=data, type='C-svc', cross=5, kernel=kernel)}) ``` Upvotes: 2 <issue_comment>username_2: Here is the link to the line where the output is generated: <https://github.com/cran/kernlab/blob/master/R/ksvm.R#L88> Looking at that we see that the message is displayed with `cat()` not with `message()`. `suppressMessages()` does not suppress the cat output. There are multiple ways to get rid of the `cat` output. One is to capture the message and then hide it like so: ``` invisible(capture.output(ksvm(...))) ``` Upvotes: 5 [selected_answer]<issue_comment>username_3: You can pass an empty list to the kpar argument. Like `ksvm(y~., data=data, type='C-svc', cross=5, kernel=kernel, kpar = list())` Upvotes: 2
2018/03/16
571
2,008
<issue_start>username_0: I am in a virtualenv and trying to run through pip installs. I know the code works because outside the virtualenv this code has worked. I am running on a Windows 10 machine. Using the Git Bash terminal or the regular command prompt (have tried as admin and regular user). I am trying to run `pip install dotenv` or `python -m pip install dotenv` and neither of the two work. I get the error saying > > AttributeError: module 'importlib.\_bootstrap' has no attribute 'SourceFileLoader' > During handling of the above exception, another > exception occurred:Command "python setup.py egg\_info" failed with > error code 1 in > C:\Users\USER~1\AppData\Local\Temp\pip-build-7bbdcnx2\dotenv\ > > > I have also tried to do things such as `python -m pip install setuptools --upgrade` un-install setuptools and install it again. restart my computer. and many other things. I'm not too sure what else to try (i've installed *flask* before this and it worked, I also can install other things like *mitmproxy* as an example) Any ideas? Again, this is a Windows 10 machine and I just want to install **dotenv** for Python (version 3.6.4 if it matters). Thank you.<issue_comment>username_1: You should install `python-dotenv` ```sh pip3 install python-dotenv ``` or ```sh pip install python-dotenv ``` i.e ``` C:\Users\USER>pip3 install python-dotenv Collecting python-dotenv Downloading python_dotenv-0.8.2-py2.py3-none-any.whl Installing collected packages: python-dotenv Successfully installed python-dotenv-0.8.2 ``` Refer this [issue](https://github.com/theskumar/python-dotenv/issues/53) Upvotes: 8 [selected_answer]<issue_comment>username_2: for me this error occurred while trying to install **pip install dotenv** > > This error originates from a subprocess, and is likely not a problem > with pip. > error: metadata-generation-failed > > Encountered error while generating package metadata. > > > **pip3 install python-dotenv** worked for me Upvotes: 2
2018/03/16
1,326
4,464
<issue_start>username_0: My app (compiled with VS20**15**) loads a 3rd party DLL(compiled with VS20**08**). Running the app in release requires only the redistributables for VS2008 and VS2015 from the MS webpage on the target machine. Running the app in debug mode (e.g. on another developer machine) requires the **debug** redistributables of VS2008, which should be avoided. Copying the msvcX90**d**.dll next to the 3rd party DLL does not lead to success. Any ideas how I can convince Windows to load the VS2008 debug runtime? Does my application need a manifest and which one is it? **FYI about mixing runtimes** Yes, I am not happy with mixed runtimes but in my context both runtimes will not intefer with each other. And recompiling the 3rd party DLL is not an option. To simplify the problem even more. How can I load msvcr90d.dll in a VS2015 compiled application? ``` LoadLibrary(L"msvcr90d.dll"); ``` [![enter image description here](https://i.stack.imgur.com/Wnzw9.png)](https://i.stack.imgur.com/Wnzw9.png)<issue_comment>username_1: After reading the helfpful resources from the comments, here is my approach how to load the debug redistributable of VS2008 (msvcr90d.dll) in VS2015 (release and debug). First your application needs a manifest file. What is a manifest file? This page summarizes it pretty well: <http://www.samlogic.net/articles/manifest.htm> > > A manifest is a XML file that contains settings that informs Windows > how to handle a program when it is started. The manifest can be > embedded inside the program file (as a resource) or it can be located > in a separate external XML file. > > > A manifest file can be placed next to our executable OR is embedded (default). To switch/change it go to `Projects Property Page -> Linker -> Manifest Tool -> Input and Output -> Embed Manifest`. 1. The pragma in the source below adds a few lines to our (by default) embedded manifest file into our executable. 2. Go to `C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\redist\Debug_NonRedist\amd64\Microsoft.VC90.DebugCRT` and copy all files next to your executable 3. Open the new `Microsoft.VC90.DebugCRT.manifest` and remove the token `publicKeyToken="<KEY>"`. If it is inside, Windows keeps rejecting to load the msvcr90d.dll. Please don't ask me why. 4. Compile and Execute your program ``` #include #pragma comment(linker,"/manifestdependency:\"type='win32' "\ "name='Microsoft.VC90.DebugCRT' "\ "version='9.0.21022.8' "\ "processorArchitecture='amd64' "\ "\"") int main() { auto\* lib = LoadLibrary(L"msvcr90d.dll"); if (!lib) { return -1; } FreeLibrary(lib); return 0; } ``` **Background** Why do we have an external manifest file (Microsoft.VC90.DebugCRT.manifest), and an embedded one (through the pragma) for this solution? It looks like the pragma only offers a limited amount of options how your manifest can look like. One option is to reference to another manifest file which contains more information. [![enter image description here](https://i.stack.imgur.com/gnBCA.png)](https://i.stack.imgur.com/gnBCA.png) [![enter image description here](https://i.stack.imgur.com/YdLci.png)](https://i.stack.imgur.com/YdLci.png) **Edit:** The answers in this MSDN [enter link description here](https://social.msdn.microsoft.com/Forums/vstudio/en-US/523f3e64-aa12-4fcc-bb7f-3aade9b188ac/how-do-i-tell-application-manifest-file-to-use-a-dll-in-current-directory-?forum=vclanguage) thread also helped a lot. 
**Edit 2:** Instead of referencing the external `Microsoft.VC90.DebugCRT.manifest` from the embedded manifest, you could embed that one directly into your executable (this can also be done from the Property Pages). I came to this conclusion too late, but my initial solution hopefully gives some more insight. Upvotes: 3 [selected_answer]<issue_comment>username_2: I just solved a similar problem by embedding my 3rd-party DLL with a manifest. Running depends.exe (Dependency Walker) on my 3rd-party DLL showed that it was looking for msvcr90.dll in C:\Windows\System32. Obviously, that's not the correct location - it should be looking in one of the WinSxS folders. Luckily, for my 3rd-party DLL, I had the source code and original makefiles, so I was able to generate the correct manifest file and embed it in the DLL using the mt command: `mt -manifest myDLL.dll.manifest -outputresource:myDLL.dll;2` After that, I was able to load the DLL correctly and my R6034 error went away. Upvotes: 1
2018/03/16
3,005
11,171
<issue_start>username_0: I'm creating a basic Android app to learn. Below I have the main activity, which creates a few TextViews in onCreate. I'm trying to clean my project up by putting that large chunk of code in another file called "CreateCategories.java" and then calling a function, class, or something that runs that code. How do I do this? Below is my current program ``` package com.company.practice; import ... . . import java.util.ArrayList; public class MainActivity extends AppCompatActivity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); RelativeLayout categoryLayout = findViewById(R.id.categoryContainer); ArrayList<Category> categories = new ArrayList<>(); categories.add(new Category("rent", "0.35", "0.00")); categories.add(new Category("loan", "0.1", "0.00")); TextView[] catogoryTitleTextView = new TextView[categories.size()]; TextView[] catogorypercentageTextView = new TextView[categories.size()]; TextView[] catogoryAmountTextView = new TextView[categories.size()]; for(int i =0; i < categories.size(); i++){ //initialize textviews TextView title = new TextView(this); TextView percentage = new TextView(this); TextView amount = new TextView(this); //set text views text, id, and textsize title.setText(categories.get(i).title); //title.setTextSize(getResources().getDimension(R.dimen.textsize)); title.setId(i + 100); percentage.setText(categories.get(i).percent); //percentage.setTextSize(getResources().getDimension(R.dimen.textsize)); percentage.setId(i + 200); amount.setText(categories.get(i).amount); //amount.setTextSize(getResources().getDimension(R.dimen.textsize)); amount.setId(i + 300); //set params for title textview RelativeLayout.LayoutParams titleParams = new RelativeLayout.LayoutParams(RelativeLayout.LayoutParams.MATCH_PARENT, RelativeLayout.LayoutParams.WRAP_CONTENT); titleParams.addRule(RelativeLayout.ALIGN_END, R.id.salaryCategoryTextVeiw); titleParams.height = RelativeLayout.LayoutParams.WRAP_CONTENT; titleParams.width = RelativeLayout.LayoutParams.WRAP_CONTENT; if(i==0){ //set params for the title textview if it is the first one.
it sits below the category textview and has more margin titleParams.addRule(RelativeLayout.BELOW, R.id.salaryCategoryTextVeiw); titleParams.topMargin = 27; } else { //this will look up the id of the last category text view titleParams.addRule(RelativeLayout.BELOW, catogoryTitleTextView[i-1].getId()); titleParams.topMargin = 15; } title.setLayoutParams(titleParams); //set params for percentage textview RelativeLayout.LayoutParams PercentParams = new RelativeLayout.LayoutParams(RelativeLayout.LayoutParams.MATCH_PARENT, RelativeLayout.LayoutParams.WRAP_CONTENT); PercentParams.addRule(RelativeLayout.ALIGN_START, R.id.salaryPercentTextVeiw); PercentParams.addRule(RelativeLayout.ALIGN_TOP, title.getId()); PercentParams.height = RelativeLayout.LayoutParams.WRAP_CONTENT; PercentParams.width = RelativeLayout.LayoutParams.WRAP_CONTENT; percentage.setLayoutParams(PercentParams); //set params for amount textview RelativeLayout.LayoutParams AmountParams = new RelativeLayout.LayoutParams(RelativeLayout.LayoutParams.MATCH_PARENT, RelativeLayout.LayoutParams.WRAP_CONTENT); AmountParams.addRule(RelativeLayout.ALIGN_START, R.id.salaryAmountTextVeiw); AmountParams.addRule(RelativeLayout.ALIGN_TOP, percentage.getId()); AmountParams.height = RelativeLayout.LayoutParams.WRAP_CONTENT; AmountParams.width = RelativeLayout.LayoutParams.WRAP_CONTENT; amount.setLayoutParams(AmountParams); //add text views to layout categoryLayout.addView(title); categoryLayout.addView(percentage); categoryLayout.addView(amount); //save the views within the arrays catogoryTitleTextView[i] = title; catogorypercentageTextView[i] = percentage; catogoryAmountTextView[i] = amount; } } } ``` I would like it to look something like this: MainActivity.java ``` package com.company.practice; import ... . .
import java.util.ArrayList; public class MainActivity extends AppCompatActivity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); } } ``` CreateCategories.java ``` Function CreateCategories() { RelativeLayout categoryLayout = findViewById(R.id.categoryContainer); ArrayList<Category> categories = new ArrayList<>(); categories.add(new Category("rent", "0.35", "0.00")); categories.add(new Category("loan", "0.1", "0.00")); TextView[] catogoryTitleTextView = new TextView[categories.size()]; TextView[] catogorypercentageTextView = new TextView[categories.size()]; TextView[] catogoryAmountTextView = new TextView[categories.size()]; for(int i =0; i < categories.size(); i++){ //initialize textviews TextView title = new TextView(this); TextView percentage = new TextView(this); TextView amount = new TextView(this); //set text views text, id, and textsize title.setText(categories.get(i).title); //title.setTextSize(getResources().getDimension(R.dimen.textsize)); title.setId(i + 100); percentage.setText(categories.get(i).percent); //percentage.setTextSize(getResources().getDimension(R.dimen.textsize)); percentage.setId(i + 200); amount.setText(categories.get(i).amount); //amount.setTextSize(getResources().getDimension(R.dimen.textsize)); amount.setId(i + 300); //set params for title textview RelativeLayout.LayoutParams titleParams = new RelativeLayout.LayoutParams(RelativeLayout.LayoutParams.MATCH_PARENT, RelativeLayout.LayoutParams.WRAP_CONTENT); titleParams.addRule(RelativeLayout.ALIGN_END, R.id.salaryCategoryTextVeiw); titleParams.height = RelativeLayout.LayoutParams.WRAP_CONTENT; titleParams.width = RelativeLayout.LayoutParams.WRAP_CONTENT; if(i==0){ //set params for the title textview if it is the first one.
it sits below the category textview and has more margin titleParams.addRule(RelativeLayout.BELOW, R.id.salaryCategoryTextVeiw); titleParams.topMargin = 27; } else { //this will look up the id of the last category text view titleParams.addRule(RelativeLayout.BELOW, catogoryTitleTextView[i-1].getId()); titleParams.topMargin = 15; } title.setLayoutParams(titleParams); //set params for percentage textview RelativeLayout.LayoutParams PercentParams = new RelativeLayout.LayoutParams(RelativeLayout.LayoutParams.MATCH_PARENT, RelativeLayout.LayoutParams.WRAP_CONTENT); PercentParams.addRule(RelativeLayout.ALIGN_START, R.id.salaryPercentTextVeiw); PercentParams.addRule(RelativeLayout.ALIGN_TOP, title.getId()); PercentParams.height = RelativeLayout.LayoutParams.WRAP_CONTENT; PercentParams.width = RelativeLayout.LayoutParams.WRAP_CONTENT; percentage.setLayoutParams(PercentParams); //set params for amount textview RelativeLayout.LayoutParams AmountParams = new RelativeLayout.LayoutParams(RelativeLayout.LayoutParams.MATCH_PARENT, RelativeLayout.LayoutParams.WRAP_CONTENT); AmountParams.addRule(RelativeLayout.ALIGN_START, R.id.salaryAmountTextVeiw); AmountParams.addRule(RelativeLayout.ALIGN_TOP, percentage.getId()); AmountParams.height = RelativeLayout.LayoutParams.WRAP_CONTENT; AmountParams.width = RelativeLayout.LayoutParams.WRAP_CONTENT; amount.setLayoutParams(AmountParams); //add text views to layout categoryLayout.addView(title); categoryLayout.addView(percentage); categoryLayout.addView(amount); //save the views within the arrays catogoryTitleTextView[i] = title; catogorypercentageTextView[i] = percentage; catogoryAmountTextView[i] = amount; } ``` Any helpful advice on how to do this would be greatly appreciated.<issue_comment>username_1: After reading the helpful resources from the comments, here is my approach for loading the debug redistributable of VS2008 (msvcr90d.dll) in VS2015 (release and debug). First your application needs a manifest file. What is a manifest file? This page summarizes it pretty well: <http://www.samlogic.net/articles/manifest.htm> > > A manifest is a XML file that contains settings that informs Windows > how to handle a program when it is started. The manifest can be > embedded inside the program file (as a resource) or it can be located > in a separate external XML file. > > > A manifest file can be placed next to our executable OR embedded (the default). To switch/change it, go to `Projects Property Page -> Linker -> Manifest Tool -> Input and Output -> Embed Manifest`. 1. The pragma in the source below adds a few lines to the (by default embedded) manifest of our executable. 2. Go to `C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\redist\Debug_NonRedist\amd64\Microsoft.VC90.DebugCRT` and copy all files next to your executable. 3. Open the new `Microsoft.VC90.DebugCRT.manifest` and remove the token `publicKeyToken="<KEY>"`. If it is inside, Windows keeps refusing to load msvcr90d.dll. Please don't ask me why. 4. Compile and execute your program. ``` #include <windows.h> #pragma comment(linker,"/manifestdependency:\"type='win32' "\ "name='Microsoft.VC90.DebugCRT' "\ "version='9.0.21022.8' "\ "processorArchitecture='amd64' "\ "\"") int main() { auto* lib = LoadLibrary(L"msvcr90d.dll"); if (!lib) { return -1; } FreeLibrary(lib); return 0; } ``` **Background** Why do we have an external manifest file (Microsoft.VC90.DebugCRT.manifest), and an embedded one (through the pragma) for this solution?
The pragma only offers a limited set of options for what your manifest can look like. One option is to reference another manifest file that contains more information. [![enter image description here](https://i.stack.imgur.com/gnBCA.png)](https://i.stack.imgur.com/gnBCA.png) [![enter image description here](https://i.stack.imgur.com/YdLci.png)](https://i.stack.imgur.com/YdLci.png) **Edit:** The answers in [this MSDN thread](https://social.msdn.microsoft.com/Forums/vstudio/en-US/523f3e64-aa12-4fcc-bb7f-3aade9b188ac/how-do-i-tell-application-manifest-file-to-use-a-dll-in-current-directory-?forum=vclanguage) also helped a lot. **Edit 2:** Instead of referencing the external `Microsoft.VC90.DebugCRT.manifest` from the embedded manifest, you could embed that one directly into your executable (this can also be done via the Property Pages). I came to this conclusion too late, but my initial solution hopefully gives some more insight. Upvotes: 3 [selected_answer]<issue_comment>username_2: I just solved a similar problem by embedding my 3rd-party DLL with a manifest. Running Depends (Dependency Walker) on my 3rd-party DLL showed that it was looking for msvcr90.dll in C:\Windows\System32. Obviously, that's not the correct location - it should be looking in one of the WinSxS folders. Luckily, for my 3rd party DLL, I had the source code and original makefiles, so I was able to generate the correct manifest file and embed it in the DLL using the mt command: `mt -manifest myDLL.dll.manifest -outputresource:myDLL.dll;2` After that, I was able to load the DLL correctly and my R6034 error went away. Upvotes: 1
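Coming back to the refactoring question that opened this thread: one minimal way to move the view-building code out of `onCreate` is a plain helper class that takes the `Activity` (needed as the `Context` for `new TextView(...)`) and the container layout as parameters. This is only a sketch; the class and method names are my own suggestions, and the loop body is just the asker's existing code moved over with `this` replaced by the passed-in `activity`:

```
// CreateCategories.java (hypothetical helper; names are suggestions, not from the original post)
import android.app.Activity;
import android.widget.RelativeLayout;
import android.widget.TextView;
import java.util.ArrayList;

public class CreateCategories {

    public static void populate(Activity activity, RelativeLayout categoryLayout) {
        // Category is the asker's own class
        ArrayList<Category> categories = new ArrayList<>();
        categories.add(new Category("rent", "0.35", "0.00"));
        categories.add(new Category("loan", "0.1", "0.00"));

        for (int i = 0; i < categories.size(); i++) {
            // same loop body as in the original onCreate, but use `activity` instead of `this`
            TextView title = new TextView(activity);
            // ... build the percentage/amount views and layout params exactly as before ...
            categoryLayout.addView(title);
        }
    }
}

// MainActivity.onCreate then shrinks to:
// CreateCategories.populate(this, (RelativeLayout) findViewById(R.id.categoryContainer));
```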
2018/03/16
536
1,687
<issue_start>username_0: When I run the test, I get the following error: ``` ({"Object.<anonymous>":function(module,exports,require,__dirname,__filename,global,jest){import React from 'react'; ^^^^^^ SyntaxError: Unexpected token import ``` It makes sense, since all the lines with "import React" or "import Enzyme" are marked as an error by ESLint. I don't know why. This is my .babelrc file: ``` { "presets": [ "react", "stage-2", ["env", { "test": { "presets": ["env", "react", "stage-2"], "plugins": ["transform-export-extensions"], "only": [ "./**/*.js", "node_modules/jest-runtime" ] }, "targets": { "browsers": ["last 2 versions", "safari >= 7"] }, "modules": false }] ] } ```<issue_comment>username_1: Hello, can you try installing babel-jest and setting this inside .babelrc: ``` "env": { "test": { "presets": ["es2015", "react"], "plugins": ["syntax-object-rest-spread", "transform-es2015-modules-commonjs"] }, ``` [testing with jest](https://facebook.github.io/jest/docs/en/tutorial-react.html) Upvotes: 0 <issue_comment>username_2: If you are using Babel 6 and Jest 24, be informed that Jest 24 has dropped support for Babel 6. There are two solutions: 1. Upgrade your app to Babel 7 (actively maintained by its developers). 2. If you don't want to upgrade, make sure that you stick with Jest 23. There is one more workaround if you want to use Jest 24: use `babel-jest` locked at version 23. ``` "dependencies": { "babel-core": "^6.26.3", "babel-jest": "^23.6.0", "babel-preset-env": "^1.7.0", "jest": "^24.0.0" } ``` Upvotes: 2
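For the first solution username_2 mentions (moving to Babel 7), the rough shape of the config change is sketched below. This is not taken from the thread, and the exact version numbers are assumptions; only the scoped preset names are the standard Babel 7 ones:

```
// .babelrc (Babel 7 uses scoped preset names)
{
  "presets": ["@babel/preset-env", "@babel/preset-react"]
}
```

with dev dependencies looking roughly like:

```
// package.json (versions are placeholders)
"devDependencies": {
  "@babel/core": "^7.0.0",
  "@babel/preset-env": "^7.0.0",
  "@babel/preset-react": "^7.0.0",
  "babel-jest": "^24.0.0",
  "jest": "^24.0.0"
}
```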
2018/03/16
918
3,310
<issue_start>username_0: I am using the git mergetool command to fix conflicts. However I have thousands of conflicts, is there way to simplify this so I get everything from the remote? I am asked to enter c, d or a in the command. ``` {local}: deleted {remote}: created file Use (c)reated or (d)eleted file, or (a)bort? ``` Since I have thousands of files, I don't want to keep sending c. Is there way to just do this in bulk?<issue_comment>username_1: supposing you want their change and modified yours you can do a pull as like: *git pull -X theirs* [Other stackOverflow answers](https://stackoverflow.com/questions/10697463/resolve-git-merge-conflicts-in-favor-of-their-changes-during-a-pull) [git pull -X](https://git-scm.com/docs/git-pull) [git merge strategies](https://git-scm.com/docs/merge-strategies) this link will help understand any other merge strategies for the futuro Upvotes: 0 <issue_comment>username_2: You can solve this outside of `git mergetool`: run `git status --porcelain` to get a list of all unmerged files and their states in machine-readable format. If your Git is new enough, it will support `--porcelain=v2`. See [the `git status` documentation](https://www.kernel.org/pub/software/scm/git/docs/git-status.html) for details on the output formats. Output format `v2` is generally superior for all purposes, but you should be able to make do with either one. Next, you must write a program. Unfortunately Git has no supplied programs for this. Your program can be fairly simple depending on the specific cases you want to solve, and you can use shell scripting (sh or bash) as the programming language, to keep it easy. Since you're concerned about the cases where `git mergetool` says: ``` Use (m)odified or (d)eleted file, or (a)bort? ``` you are interested in those cases where the file name is missing in the stage 1 ("base") version and also missing in the stage 2 ("local") version, but exists in the stage 3 ("remote") version. (See the `git status` documentation again and look at examples of your `git status --porcelain=v2` output to see how to detect these cases. Two of the three modes will be zero.) For those particular path names, simply run `git add` on the path name to mark the file as resolved in favor of the created file. Once you have marked all such files, you can go back to running `git mergetool` to resolve additional conflicts, if there are any. Note that your "program" can consist of running: ``` git status --porcelain=v2 > /tmp/commands.sh ``` and then editing `/tmp/commands.sh` to delete all but the lines containing files that you want to `git add`. Then change all of those lines to read `git add` where is the name of the file. Exit the editor and run `sh /tmp/commands.sh` to execute all the `git add` commands. That's your program! Upvotes: 3 [selected_answer]<issue_comment>username_3: If you want that all the change you did will be deleted and you will be sync with the remote. You should do the following: ``` git stash git pull ``` And if you want to restore the change you did you should type: ``` git stash pop ``` Basically 'git stash' is moving the change to a temp repository. you can learn more in: [NDP software:: Git Cheatsheet](http://ndpsoftware.com/git-cheatsheet.html#loc=stash;) Upvotes: 0
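A compact version of the script username_2 describes might look like the sketch below. It is only a sketch: it assumes the v1 `--porcelain` format, where "deleted by us / created by them" conflicts show up with the two-letter status `DU` at the start of the line, so double-check that status code against your own `git status --porcelain` output before trusting it:

```
# resolve every "deleted by us / created by them" conflict in favour of the created file
git status --porcelain | while read -r status path; do
    if [ "$status" = "DU" ]; then
        git add -- "$path"
    fi
done
```

After that, running `git mergetool` again only presents the remaining conflicts.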
2018/03/16
890
3,295
<issue_start>username_0: I'm a beginner in Java, so sorry in advance if I don't understand certain words. I keep having the error: Cannot resolve symbol @EnableEurekaServer... When I manually type in the import line for the Eureka server, the word "cloud" is highlighted red in: ``` import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer; ``` In my build.gradle file, I have ``` compile('org.springframework.cloud:spring-cloud-netflix-eureka-server') ``` Why does this happen... Everything looks like it's supposed to work. I can provide screenshots of things if asked! My build.gradle file looks like this: ``` buildscript { ext { springBootVersion = '2.0.0.RELEASE' } repositories { mavenCentral() } dependencies { classpath("org.springframework.boot:spring-boot-gradle-plugin:${springBootVersion}") } } apply plugin: 'java' apply plugin: 'eclipse' apply plugin: 'org.springframework.boot' apply plugin: 'io.spring.dependency-management' group = 'com.example' version = '0.0.1-SNAPSHOT' sourceCompatibility = 1.8 repositories { mavenCentral() } dependencies { compile('org.springframework.cloud:spring-cloud-netflix-eureka-server') compile('org.springframework.boot:spring-boot-starter-web') testCompile('org.springframework.boot:spring-boot-starter-test') } ``` My EurekaApplicationServer.java looks like this: ``` package com.example.eurekaserver; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; //import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer; @SpringBootApplication public class EurekaApplicationServer { public static void main(String[] args) { SpringApplication.run(EurekaApplicationServer.class, args); } } ``` [picture of err](https://i.stack.imgur.com/RSTcs.png)<issue_comment>username_1: Use the dependency with a specific version; the current version at the moment of this writing is: ``` compile('org.springframework.cloud:spring-cloud-netflix-eureka-server:1.4.3.RELEASE') ``` You can find the latest available version number [here](https://mvnrepository.com/artifact/org.springframework.cloud/spring-cloud-netflix-eureka-server). For Spring Boot projects, when you don't specify the dependency version, the special [dependency management](https://github.com/spring-gradle-plugins/dependency-management-plugin) plug-in is used. For some reason it fails to provide the version for this specific dependency. See the [related question](https://stackoverflow.com/questions/41676534/how-are-some-gradle-dependencies-working-with-no-version-supplied). Upvotes: 3 [selected_answer]<issue_comment>username_2: There is no need to lower the version of eureka-server/client. The reason is that the Maven repo cannot resolve the latest version of the Eureka server. To solve this, add the repository to your pom/Gradle file. ``` repositories { maven { url 'https://repo.spring.io/libs-milestone' } } ``` or ``` <repositories> <repository> <id>spring-milestones</id> <name>Spring Milestones</name> <url>https://repo.spring.io/libs-milestone</url> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> ``` Upvotes: -1 <issue_comment>username_3: ``` <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-netflix-eureka-server</artifactId> <version>1.1.6.RELEASE</version> </dependency> ``` It worked for me. Upvotes: 0
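Another common way to let Gradle resolve versionless Spring Cloud starters is to import the Spring Cloud BOM through the `io.spring.dependency-management` plugin that is already applied in the question's build.gradle. This is a sketch rather than something from the thread, and the release-train name is an assumption (Finchley is the train that pairs with Spring Boot 2.0; pick the one matching your Boot version):

```
dependencyManagement {
    imports {
        // release-train version is an assumption
        mavenBom "org.springframework.cloud:spring-cloud-dependencies:Finchley.RELEASE"
    }
}
```

With the BOM imported, `compile('org.springframework.cloud:spring-cloud-netflix-eureka-server')` can stay versionless.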
2018/03/16
806
2,925
<issue_start>username_0: ``` public class ItemStore { private Dictionary<Type, List<object>> _items = new Dictionary<Type, List<object>>(); public void AddItem(object item) { var type = item.GetType(); if (!_items.ContainsKey(type)) { _items[type] = new List<object>(); } _items[type].Add(item); } public IEnumerable<T> GetItems<T>() { if(!_items.ContainsKey(typeof(T))) { return new List<T>(); } return _items[typeof(T)].Cast<T>(); } } ``` (The real scenario is more complex, and it is a library used in multiple projects that knows nothing about the concrete types.) The `Cast<T>()` in `GetItems<T>()` is consuming quite some time, so I would prefer to avoid it. Is there any way to avoid it in C#? After all, the list already contains the correct types.<issue_comment>username_1: You need to modify the internal structure of this class a bit to not use generics in the items lookup, because we need the underlying type of the stored list to be the correct type. This requires a little reflection when creating the list. We can also make AddItem and GetItems a little more efficient by avoiding extra lookups: ``` public class ItemStore { private Dictionary<Type, IList> _items = new Dictionary<Type, IList>(); public void AddItem(object item) { var type = item.GetType(); IList list; if (!_items.TryGetValue(type, out list)) { var listType = typeof(List<>).MakeGenericType(type); list = (IList)Activator.CreateInstance(listType); _items[type] = list; } list.Add(item); } public IEnumerable<T> GetItems<T>() { IList list; if(!_items.TryGetValue(typeof(T), out list)) { return Enumerable.Empty<T>(); } else { return (IEnumerable<T>)list; } } } ``` If you can modify the signature of AddItem this could be even simpler (no reflection), but given you've said this is an over simplification, I will leave the public API unchanged. Upvotes: 3 <issue_comment>username_2: You could make your method AddItem generic, which would allow you to store `List<T>` instances in your dictionary (whose generic *TValue* parameter should be `IList` in this case). ``` public class ItemStore { private Dictionary<Type, IList> _items = new Dictionary<Type, IList>(); public void AddItem<T>(T item) { IList objList; if (!_items.TryGetValue(typeof(T), out objList)) { objList = new List<T>(); _items[typeof(T)] = objList; } objList.Add(item); } public IEnumerable<T> GetItems<T>() { IList objList; return (_items.TryGetValue(typeof(T), out objList)) ? (List<T>)objList : new List<T>(); } } ``` Upvotes: 2 <issue_comment>username_3: * Use IList instead of List<T> * Make the AddItem() method generic ``` public class ItemStore { private Dictionary<Type, IList> _items = new Dictionary<Type, IList>(); public void AddItem<T>(T item) { var type = typeof(T); if (!_items.ContainsKey(type)) { _items[type] = new List<T>(); } _items[type].Add(item); } public IEnumerable<T> GetItems<T>() { if (!_items.ContainsKey(typeof(T))) { return new List<T>(); } return (List<T>)_items[typeof(T)]; } } ``` Upvotes: 3 [selected_answer]
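A quick usage sketch of the generic-AddItem variants above. The `Foo`/`Bar` types here are placeholders I made up, not anything from the question:

```
class Foo { }
class Bar { }

// assumes an ItemStore with generic AddItem<T> / GetItems<T> as in the answers above
var store = new ItemStore();
store.AddItem(new Foo());
store.AddItem(new Foo());
store.AddItem(new Bar());

IEnumerable<Foo> foos = store.GetItems<Foo>();   // no Cast<T>() needed
```

The point of the restructuring is that each per-type bucket is created as a real `List<T>`, so handing it back as `IEnumerable<T>` is a single reference cast instead of an element-by-element conversion.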
2018/03/16
380
1,308
<issue_start>username_0: I have a query that returns following values: ``` TemplateCode Total 1418 35 7419 31 7418 31 8325 17 15623 17 4997 17 ``` I want all the rows with top 3 `Total` values In my query if I include `LIMIT 3` in the query then it gives only 3 which I don't want. I don't want to include `LIMIT` because count may vary from time to time. How can I include condition on `Total` and always get top 3 count values My current query is like: ``` select TemplateCode, count(*) as Total from table group by TemplateCode order by Total desc limit 3 ```<issue_comment>username_1: You could use a inner join on subquery for the top 3 total ``` select m.TemplateCode , m.total from my_table m inner join ( select Total from my_table order by Total Desc limit 3 ) t on t.total = m.total order by m.total, m.TemplateCode ``` Upvotes: 0 <issue_comment>username_2: I think this does what you want: ``` select t.* from t where t.Total >= (select distinct t2.Total from t t2 order by t2.Total desc limit 2, 1 ); ``` This assumes that you want the third *distinct* value. If you just want the third value, remove the `distinct`. Upvotes: 1
2018/03/16
867
3,320
<issue_start>username_0: I have a helper method to get the Display Name of an enum: ``` public static string GetDisplayName(this Enum @enum) { return @enum.GetAttribute()?.GetName(); } ``` I'm trying to make a Dictionary out of the enum names as keys, with the values as the display names using this code: ``` public static Dictionary EnumDisplayNameDictionary(this Enum @enum) { var returnDict = new Dictionary(); foreach (var item in Enum.GetValues(typeof(TEnum))) { returnDict.Add(item.ToString(), @enum.GetDisplayName()); } return returnDict; } ``` Here is the code for one of my enums (with all its WCF magic included): ``` [DataMember] [Column("cash_revenue_recognition")] public byte CashRevenueRecognitionId { get { return (byte)this.CashRevenueRecognition; } set { CashRevenueRecognition = (CashRevenueRecognitionChoices) value; } } [DataMember] [EnumDataType(typeof(CashRevenueRecognitionChoices))] public CashRevenueRecognitionChoices CashRevenueRecognition { get; set; } [DataContract] [Flags] public enum CashRevenueRecognitionChoices { [EnumMember] [Display(Name ="")] NotSet = 0, [EnumMember] [Display(Name = "When we invoice a customer for their payment")] CashAtInvoice = 1, [EnumMember] [Display(Name = "When we receive a customer payment in our bank account ")] CashAtPayment = 2 } [DataMember] public Dictionary CashEnumDictionary => CashRevenueRecognition.EnumDisplayNameDictionary(); ``` The problem I'm having is that I don't know how to get the Display Name of the specific item I am currently referencing in my loop. Actual Output: ``` { "NotSet" : "When we receive a customer payment in our bank account", "CashAtInvoice" : "When we receive a customer payment in our bank account", "CashAtPayment" : "When we receive a customer payment in our bank account" } ``` Expected Output: ``` { "NotSet" : "", "CashAtInvoice" : "When we invoice a customer for their payment", "CashAtPayment" : "When we receive a customer payment in our bank account" } ```<issue_comment>username_1: ``` foreach (var item in Enum.GetValues(typeof(TEnum))) { returnDict.Add(item.ToString(), item.GetDisplayName()); } ``` Upvotes: 0 <issue_comment>username_2: In this function, the Enum.GetValues call return a list of objects, which doesn't have a definition for GetDisplayName(). And the reason you see the same description for each one is that the @enum object is the only object you are using to get description ``` public static Dictionary EnumDisplayNameDictionary(this Enum @enum) { var returnDict = new Dictionary(); foreach (var item in Enum.GetValues(typeof(TEnum))) { returnDict.Add(item.ToString(), @enum.GetDisplayName()); } return returnDict; } ``` Simply cast item to Enum, and it will allow you to call GetDisplayName(). ``` public static Dictionary EnumDisplayNameDictionary(this Enum @enum) { var returnDict = new Dictionary(); foreach (var item in Enum.GetValues(typeof(TEnum))) { returnDict.Add(item.ToString(), ((Enum)item).GetDisplayName()); } return returnDict; } ``` With this, the @enum object is not used within this function, so this could be changed to a static function, rather than an extension function. Upvotes: 2 [selected_answer]
2018/03/16
559
1,934
<issue_start>username_0: I'm using the neo4j browser to search for a node in my graph by label. I know the node exists and the label on the node is correct, but when I specify the label on the node, neo4j can't find it. More specifically, I use the following cypher query in the neo4j browser: ``` match (a:Foo) where a.value = "Bar" return a (no changes, no records) ``` However a node with that value does exist with that label: ``` match (a) where a.value = "Bar" return labels(a) ["Foo"] ``` There is an index on that label, but I don't know if that is relevant. ``` :schema Indexes ON :Foo(value) ONLINE ``` Additionally, explicitly resetting the label doesn't seem to work: ``` match (a) where a.value = "Bar" set a :Foo return a ``` Will return the node with the correct label applied, but when I retry my original query it still cannot find the node. This has worked previously, but I have made some changes recently to our heap size/ page cache size /gc type trying to run down another issue with 100% cpu hangs on garbage collection. None of those changes should have affected labels. Has anyone had this experience before? I'm using Neo4j CE v3.3.0<issue_comment>username_1: Looks like an issue of index corruption, as mentioned in my comment, dropping then creating the index again should fix it. As to how the index got corrupted, I can't say, but you are using a .0 release, and those tend to have more bugs than others. You may want to upgrade to the latest 3.3.x release. You can always run the consistency check via neo4j-admin to see if anything else might be going on with your graph data. Upvotes: 2 [selected_answer]<issue_comment>username_2: This did not work ``` MATCH ( p:item {name:"spam"}) RETURN p.name ``` (no changes, no records) But this did work: ``` Match (n:item) Where n.name =~ '.*spam.*' Return n.name, n.purchase n.name n.purchase """spam""" 8 ``` Upvotes: 0
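For completeness, rebuilding the possibly-corrupt index from the accepted answer looks roughly like this in Neo4j 3.x Cypher (a sketch; adjust the label and property to your own schema):

```
DROP INDEX ON :Foo(value);
CREATE INDEX ON :Foo(value);
```

The consistency check can be run offline with the admin tool, something along the lines of `neo4j-admin check-consistency --database=graph.db`; the exact flags depend on your Neo4j version.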
2018/03/16
363
1,332
<issue_start>username_0: I know that Jquery can do: ``` $('#div').on('click', function() { //do something }); ``` What I want to do is check for a click without calling a function in a conditional, something like this: ``` if ($('#div').on('click') && //another conditional) { //do something } ``` Without the function call. How would I do this?<issue_comment>username_1: One way you can do this is to just check the condition *inside* your function: ``` $('#div').on('click', function() { if(/*condition*/) { //do something } }); ``` This is the most straightforward way to do this in my opinion. Also, if you want to use the same element for the conditional statement, you could use `$(this)` inside the function as the selector. Upvotes: 1 <issue_comment>username_2: The jQuery on hover function does not return a boolean value, only triggers the function that is passed by the second parameter (the first parameter is the action that triggers the function like click). The only way to use a condition is to use it inside the function: ``` $('#div').on('click', function() { if(...) { ... } }); ``` You can see the same answer in a similar question: [How to do "If Clicked Else .."](https://stackoverflow.com/questions/6629297/how-to-do-if-clicked-else/6629427) Upvotes: 0
2018/03/16
571
2,004
<issue_start>username_0: I am new to cordova. Trying to work out with it but when I check requirements it says Android Target not installed while I have installed SDK in Android Studio. Required information for troubleshooting can be found [here](https://www.pastebin.com/ALd4Bizw). I am using Windows 8.1 32 bit installation. I have already tried previous answers found oh the site. ### Update I just noticed that running `cordova build android` gives ``` D:\Android\Apps\hell>cordova platform ls Installed platforms: android 7.0.0 Available platforms: browser ~5.0.1 ios ~4.5.4 osx ~4.0.1 windows ~5.0.0 www ^3.12.0 ``` Indicating android version to be 7.0.0 thus API is 24. But `D:\Android\Apps\hell\platforms\android\CordovaLib\project.properties` have : ``` # Indicates whether an apk should be generated for each density. split.density=false # Project target. target=android-26 apk-configurations= renderscript.opt.level=O0 android.library=true ``` Here target is set to android-26, which is Android 8.0<issue_comment>username_1: One way you can do this is to just check the condition *inside* your function: ``` $('#div').on('click', function() { if(/*condition*/) { //do something } }); ``` This is the most straightforward way to do this in my opinion. Also, if you want to use the same element for the conditional statement, you could use `$(this)` inside the function as the selector. Upvotes: 1 <issue_comment>username_2: The jQuery on hover function does not return a boolean value, only triggers the function that is passed by the second parameter (the first parameter is the action that triggers the function like click). The only way to use a condition is to use it inside the function: ``` $('#div').on('click', function() { if(...) { ... } }); ``` You can see the same answer in a similar question: [How to do "If Clicked Else .."](https://stackoverflow.com/questions/6629297/how-to-do-if-clicked-else/6629427) Upvotes: 0
2018/03/16
1,469
3,333
<issue_start>username_0: Assume I have the following list: ``` list(c(1:5,NA,NA),NA,c(NA,6:10)) [[1]] [1] 1 2 3 4 5 NA NA [[2]] [1] NA [[3]] [1] NA 6 7 8 9 10 ``` I want to replace all `NA`s with `0`: ``` [[1]] [1] 1 2 3 4 5 0 0 [[2]] [1] 0 [[3]] [1] 0 6 7 8 9 10 ``` I was originally thinking `is.na` would be involved, but couldn't get it to affect all list elements. I learned from the related question ([Remove NA from list of lists](https://stackoverflow.com/q/25777104/4581200)), that using `lapply` would allow me to apply `is.na` to each element, but that post demonstrates how to remove (not *replace*) `NA` values. **How do I *replace* `NA` values from *multiple* list elements?** I've tried `for` loops and `ifelse` approaches, but everything I've tried is either slow, doesn't work or just plain clunky. There's got to be a simple way to do this with an `apply` function...<issue_comment>username_1: And there *is*! Here's a simple `lapply` approach using the `replace` function: ``` L1 <-list(c(1:5,NA,NA),NA,c(NA,6:10)) lapply(L1, function(x) replace(x,is.na(x),0)) ``` With the desired result: ``` [[1]] [1] 1 2 3 4 5 0 0 [[2]] [1] 0 [[3]] [1] 0 6 7 8 9 10 ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: There are multiple ways to do this: using `map` from purrrr package. ``` lt <- list(c(1:5,NA,NA),NA,c(NA,6:10)) lt %>% map(~replace(., is.na(.), 0)) #output [[1]] [1] 1 2 3 4 5 0 0 [[2]] [1] 0 [[3]] [1] 0 6 7 8 9 10 ``` Upvotes: 2 <issue_comment>username_3: ``` kk<- list(c(1:5,NA,NA),NA,c(1,6:10)) lapply(kk, function(i) { p<- which(is.na(i)==TRUE) i[p] <- 0 i }) ``` Edited upon Gregor's commment ``` lapply(kk, function(i) {i[is.na(i)] <- 0; i}) ``` Upvotes: 1 <issue_comment>username_4: Try this: ``` lapply(enlist, function(x) { x[!is.na(x)]}) ``` where: ``` enlist <- list(c(1:5,NA,NA),NA,c(NA,6:10)) ``` This yields: ``` [[1]] [1] 1 2 3 4 5 [[2]] logical(0) [[3]] [1] 6 7 8 9 10 ``` Upvotes: -1 <issue_comment>username_1: I've decided to **benchmark the various `lapply` approaches mentioned**: ``` lapply(Lt, function(x) replace(x,is.na(x),0)) lapply(Lt, function(x) {x[is.na(x)] <- 0; x}) lapply(Lt, function(x) ifelse(is.na(x), 0, x)) ``` Benchmarking code: ``` Lt <- lapply(1:10000, function(x) sample(c(1:10000,rep(NA,1000))) ) ##Sample list elapsed.time <- data.frame( m1 = mean(replicate(25,system.time(lapply(Lt, function(x) replace(x,is.na(x),0)))[3])), m2 = mean(replicate(25,system.time(lapply(Lt, function(x) {x[is.na(x)] <- 0; x}))[3])), m3 = mean(replicate(25,system.time(lapply(Lt, function(x) ifelse(is.na(x), 0, x)))[3])) ) ``` Results: ``` Function Average Elapsed Time lapply(Lt, function(x) replace(x,is.na(x),0)) 0.8684 lapply(Lt, function(x) {x[is.na(x)] <- 0; x}) 0.8936 lapply(Lt, function(x) ifelse(is.na(x), 0, x)) 8.3176 ``` **The `replace` approach is fastest followed closely by the `[]` approach. The `ifelse` approach is 10x slower.** Upvotes: 1 <issue_comment>username_5: This will deal with any list depth and structure: ``` x <- eval(parse(text=gsub("NA","0",capture.output(dput(a))))) # [[1]] # [1] 1 2 3 4 5 0 0 # # [[2]] # [1] 0 # # [[3]] # [1] 0 6 7 8 9 10 ``` Upvotes: 0
2018/03/16
1,517
5,273
<issue_start>username_0: I have a react app that uses react-router with a Router that looks like: ``` ``` The app is hosted using Firebase hosting. If I open my website and click around such that the router takes me to /map/uid, it loads fine. But if I close the browser and then try to explicitly navigate to `/map/`, I get a "no page found" page from Firebase. This is my first time using react-router. [![enter image description here](https://i.stack.imgur.com/2Ws5w.png)](https://i.stack.imgur.com/2Ws5w.png) **Update #1** I have updated my `firebase.json` to: ``` { "hosting": { "public": "build", "ignore": [ "firebase.json", "**/.*", "**/node_modules/**" ], "rewrites": [ { "source": "**", "destination": "/index.html" } ] } } ``` I no longer receive the "Page not found" page; however, the content of my react app never loads, and I notice in my console an error: ``` Uncaught SyntaxError: Unexpected token < ``` **Update #2** I now understand why I am getting the error. Looking at the Sources tab in Chrome dev tools, my static/ folder has a strange name (`static/css` rather than `static`) only when I directly navigate to `/map/{known uid}`. When I navigate to the home page, all loads fine. This explains the error. I am still not sure how to fix.<issue_comment>username_1: What about specyfying the `basename` in `Router`? Something along this: ``` ``` Upvotes: -1 <issue_comment>username_2: For me I could see the root url but other routes like "/pricing" were giving me a 404. I added this to my firebase.json file and now it works. Also make sure firebase/auth is allowed to work on the domain. There is a setting in the auth section of firebase. ``` "rewrites": [ { "source": "**", "destination": "/index.html" }], ``` My full firebase.json ``` { "firestore": { "rules": "firestore.rules", "indexes": "firestore.indexes.json" }, "hosting": { "public": "build", "rewrites": [ { "source": "**", "destination": "/index.html" }], "ignore": [ "firebase.json", "**/.*", "**/node_modules/**" ] }, "storage": { "rules": "storage.rules" } } ``` Upvotes: 6 <issue_comment>username_3: Late to the party, but try removing the "homepage" key from your package.json (or making sure that it is correct relative to where the homepage is stored. Upvotes: 2 <issue_comment>username_4: Late answer, but I'm facing with the same issue. I solved this with 2 steps: 1. update firebase.json like this ```js { "hosting": { "site": "myproject", "public": "build", "ignore": [ "firebase.json", "**/.*", "**/node_modules/**" ], "rewrites": [ { "source": "**", "destination": "/index.html" } ] } } ``` 2. set the base url in index.html like this ```html . . . ``` Upvotes: 3 <issue_comment>username_5: Late answer, but it solved the problem for me: When doing `firebase init` it will ask whether it will be Single Page App or no. Default answer is `No`, however, just choose `Yes` and it will work. Upvotes: 2 <issue_comment>username_6: You're getting this error because of client-side routing. (Deep inside React) 1. When building the react app in the build(folder) you see only one index.html file. 2. when you hit URL with YOUR\_DOMAIN/map Now firebase is trying fetch **build->map->index.html** but is present in your build folder. So you can do 1. Are you can use react-router-dom. After building application build folder , index.html you can mention . Upvotes: 1 <issue_comment>username_7: Try setting the `cleanUrls` property to `true`. 
See [Firebase docs](https://firebase.google.com/docs/hosting/full-config#control_html_extensions) for more info ```json "hosting": { // ... // Drops `.html` from uploaded URLs "cleanUrls": true } ``` Upvotes: 0 <issue_comment>username_8: In my case the issue was different. I was able to see the index page correctly but when I was navigating to a new route I was seeing a blank page. Firebase was configured correctly with: ``` "rewrites": [ { "source": "**", "destination": "/index.html" } ] ``` After the deploy I went in the request Tab and seen this. [![enter image description here](https://i.stack.imgur.com/XnblO.png)](https://i.stack.imgur.com/XnblO.png) I then checked the `index.html` file and noticed this: [![enter image description here](https://i.stack.imgur.com/Go1N0.png)](https://i.stack.imgur.com/Go1N0.png) Since my configuration builds everything in the same `dist` directory, ``` dist/index.html dist/main.bundle.js ... ``` the issue was straight forward. I had to tell webpack were my assets were placed relatively to the `index.html` file by adjusting the webpack configuration like so: ``` { ... output: { ... publicPath: '/', ... } ... } ``` now the files get required like so: [![enter image description here](https://i.stack.imgur.com/mHYB3.png)](https://i.stack.imgur.com/mHYB3.png) This will tell the browser to look into the root, where the `index.html` is placed. Make sure you change the production configuration if you have multiple configurations and you are not extending them. Upvotes: 0
2018/03/16
1,193
4,092
<issue_start>username_0: If I have a number that is 1,234.56, I need to format with no decimal, no comma & leading zeros. The output I'm seeking would be 000123456.<issue_comment>username_1: What about specyfying the `basename` in `Router`? Something along this: ``` ``` Upvotes: -1 <issue_comment>username_2: For me I could see the root url but other routes like "/pricing" were giving me a 404. I added this to my firebase.json file and now it works. Also make sure firebase/auth is allowed to work on the domain. There is a setting in the auth section of firebase. ``` "rewrites": [ { "source": "**", "destination": "/index.html" }], ``` My full firebase.json ``` { "firestore": { "rules": "firestore.rules", "indexes": "firestore.indexes.json" }, "hosting": { "public": "build", "rewrites": [ { "source": "**", "destination": "/index.html" }], "ignore": [ "firebase.json", "**/.*", "**/node_modules/**" ] }, "storage": { "rules": "storage.rules" } } ``` Upvotes: 6 <issue_comment>username_3: Late to the party, but try removing the "homepage" key from your package.json (or making sure that it is correct relative to where the homepage is stored. Upvotes: 2 <issue_comment>username_4: Late answer, but I'm facing with the same issue. I solved this with 2 steps: 1. update firebase.json like this ```js { "hosting": { "site": "myproject", "public": "build", "ignore": [ "firebase.json", "**/.*", "**/node_modules/**" ], "rewrites": [ { "source": "**", "destination": "/index.html" } ] } } ``` 2. set the base url in index.html like this ```html . . . ``` Upvotes: 3 <issue_comment>username_5: Late answer, but it solved the problem for me: When doing `firebase init` it will ask whether it will be Single Page App or no. Default answer is `No`, however, just choose `Yes` and it will work. Upvotes: 2 <issue_comment>username_6: You're getting this error because of client-side routing. (Deep inside React) 1. When building the react app in the build(folder) you see only one index.html file. 2. when you hit URL with YOUR\_DOMAIN/map Now firebase is trying fetch **build->map->index.html** but is present in your build folder. So you can do 1. Are you can use react-router-dom. After building application build folder , index.html you can mention . Upvotes: 1 <issue_comment>username_7: Try setting the `cleanUrls` property to `true`. See [Firebase docs](https://firebase.google.com/docs/hosting/full-config#control_html_extensions) for more info ```json "hosting": { // ... // Drops `.html` from uploaded URLs "cleanUrls": true } ``` Upvotes: 0 <issue_comment>username_8: In my case the issue was different. I was able to see the index page correctly but when I was navigating to a new route I was seeing a blank page. Firebase was configured correctly with: ``` "rewrites": [ { "source": "**", "destination": "/index.html" } ] ``` After the deploy I went in the request Tab and seen this. [![enter image description here](https://i.stack.imgur.com/XnblO.png)](https://i.stack.imgur.com/XnblO.png) I then checked the `index.html` file and noticed this: [![enter image description here](https://i.stack.imgur.com/Go1N0.png)](https://i.stack.imgur.com/Go1N0.png) Since my configuration builds everything in the same `dist` directory, ``` dist/index.html dist/main.bundle.js ... ``` the issue was straight forward. I had to tell webpack were my assets were placed relatively to the `index.html` file by adjusting the webpack configuration like so: ``` { ... output: { ... publicPath: '/', ... } ... 
} ``` now the files get required like so: [![enter image description here](https://i.stack.imgur.com/mHYB3.png)](https://i.stack.imgur.com/mHYB3.png) This will tell the browser to look into the root, where the `index.html` is placed. Make sure you change the production configuration if you have multiple configurations and you are not extending them. Upvotes: 0
2018/03/16
985
3,188
<issue_start>username_0: I am trying to get **cumulative** counts of users and AUM by month. However, a user can have multiple goals, which correspond to accounts, and accounts can have multiple positions. <http://sqlfiddle.com/#!15/f3c59/2> As you can see, this doesn't work because user 1 is in 2/1/18 with 300 AUM and then again counts in 3/1/18 with 400 AUM. So cumulatively, we are saying there are 3 users (instead of 3 goals), when there are actually only 2 users. Is there a way to present this independent of monthly grouping? So 2/1/18 there are 2 clients with 500 AUM, and 3/1/18 there are still 2 clients but with 900 AUM? Expected output: ``` month users AUM 2/1/18 2 500 3/1/18 2 900 ```<issue_comment>username_1: I actually have a solution for you that is working but it's very, very slow!! ``` select distinct date_trunc('month', a.date), (select count(distinct g.user_id) from goals g join accounts ac on ac.goal_id = g.id where date_trunc('month', ac.date) <= date_trunc('month', a.date) and ac.status = 'current'), (select sum(p.amount) from positions p join accounts acc on acc.id = p.account_id where date_trunc('month', acc.date) <= date_trunc('month', a.date) and acc.status='current') from accounts a where a.status = 'current' ``` I really can not see a way to do this without subqueries. In fact I think, that your data model is flawed. It took me a lot of time to get the relations right and after some time I realised, that you do not even need the "goals" and "users" tables for your purpose. I actually have no good clue, how to improve your data model without knowing the whole database structure. My first step would be to store a separate value in your "accounts" table, which indicates the time slot, where the positions have to be summed up. I also thought about moving the "date" value to a separate table, but then again I don't know the whole database - so these are just simple ideas. Upvotes: 2 [selected_answer]<issue_comment>username_2: I think I've got what you need (kinda!) IF I've got it right, the correct answer on the sample will be : ``` | month | users | amount | |----------------------|-------|--------| | 2018-02-01T00:00:00Z | 2 | 500 | | 2018-03-01T00:00:00Z | 3 | 900 | ``` there will be 1 user in month 3, and another 2 cumulative users from month 2 (previous month). the user that in month 3 has $400, so when we add the previous month amount to it it'll be $900 ( 400 + 500) ... if this is what you're looking for, then this is the query for it : ``` SELECT month, SUM(users) OVER(ORDER BY month ASC) AS users, SUM(AUM) OVER(ORDER BY month ASC) AS amount FROM ( select date_trunc('month',date) as month, count(distinct u.id) users, sum(amount) AUM from users u join goals g on u.id=g.user_id join accounts s on g.id=s.goal_id join positions p on s.id=p.account_id where status='current' group by 1 ) D ``` I hope this is what you're seeking! [SQL Fiddle](http://sqlfiddle.com/#!15/f3c59/8) Upvotes: 0
2018/03/16
287
1,035
<issue_start>username_0: I am using Font Awesome icons in a page I am developing, and while the `fas` icons work fine, the `fab` icons do not. Instead, with the `fab` icons, I get a blank square. I currently have both the Font Awesome CDN link and a link to the local `fontawesome-all.min.css` file. I have disabled AdBlock Plus, and I am using `http-server` as my development server. Here is a copy of the head and first section of my page. ``` Download ![banner](images/banner.png) phone crasher App =================== Lorem ipsum dolor sit amet, consectetur adipisicing elit. [Apple Store](#) [Google Store](#) ```<issue_comment>username_1: It looks like you are missing the import for the brands.js file ``` ``` Please read their [docs](https://fontawesome.com/get-started/svg-with-js) (hint: click the "Brands" pill to include it) Upvotes: 1 <issue_comment>username_2: Don't include Font Awesome twice; that sometimes causes problems. Use this: ``` ``` <https://fontawesome.com/get-started/web-fonts-with-css> Upvotes: 0
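For reference, with Font Awesome 5 the brand (`fab`) icons live in a separate brands file, so you need either the brands file alongside the core, or the all-in-one build. A rough sketch of what such includes usually look like; the version number and exact CDN paths are assumptions here, so copy the real tags from fontawesome.com:

```
<!-- SVG + JS method: core plus brands (paths/version are assumptions) -->
<script defer src="https://use.fontawesome.com/releases/v5.0.13/js/brands.js"></script>
<script defer src="https://use.fontawesome.com/releases/v5.0.13/js/fontawesome.js"></script>

<!-- or the Web Fonts + CSS method with the all-in-one stylesheet -->
<link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.0.13/css/all.css">
```

With one of those in place, markup like `<i class="fab fa-apple"></i>` should render instead of a blank square.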
2018/03/16
586
2,232
<issue_start>username_0: Suppose I have a method that issues a `POST` and then calls an appropriate callback function in order to handle the response: ``` myService.verify(id, verificationCallback); function verificationCallback(err, response) { ... } ``` Maybe this question is two-fold. There seem to be 2 implicit arguments being passed to `verificationCallback` (is this the case? how does this work?) How would I then be able to pass in a third argument to that callback? Would I do something like: ``` myService.verify(id, verificationCallback(err, response, someOtherArgument)); ``` Would this break because there are no `err` and `response` variables in the current context? Would I access these variables using the `arguments` object? **Possible Solution (?)** Using an anonymous function: ``` myService.verify(id, function(err, response) { // Access my other variable here someOtherArgument === ... }); ``` Thanks<issue_comment>username_1: I wouldn't call that *implicit*: ``` const myService = { verify(id, cb){ //... cb(null, "data"); // <---- } }; ``` Upvotes: 0 <issue_comment>username_2: ``` myService.verify(id, verificationCallback(err, response, someOtherArgument)); ``` This would not work. It would call the function immediately, with (most likely) undefined variables. The arguments are not passed implicitly, but explicitly when the function is invoked inside the verify function. See JonasW's answer. Here is a possible solution: ``` function callback(yourThirdArgument) { return function(err, response) { ... } } ``` Usage: ``` myService.verify(id, callback(someOtherArgument)); ``` Upvotes: 1 <issue_comment>username_3: You could use `.bind()`. Pass `null` for the `this` value and `someOtherArgument` will be passed as the first argument to your callback. Here's an [MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/bind) link for more info. ``` const someOtherArgument = ""; // Use .bind() to attach an argument to your callback. myService.verify(id, verificationCallback.bind(null, someOtherArgument)); function verificationCallback(someOtherArgument, err, response) { ... } ``` Upvotes: 1
2018/03/16
623
2,241
<issue_start>username_0: I have a dataset that contains 81 columns with a person id, 79 binary variables and a cost variable: ``` id h1 h2 h3 ... h79 cost 1 1 0 1 1 15 2 1 1 1 1 80 3 0 1 1 0 10 ... ``` Each person id has one row of records. Now I want to find which combinations of two h (binary) variables have more than 50 unique person ids, and if so, calculate their total cost. I guess a good way to approach it is to create an array with all h variables and use two DO loops? But what if I want to see a group of three variables, or maybe four or five? And how am I going to store the combination of variable names, so I know which combination of variables has which total cost? So I think the final output is going to look like this: ``` combinations total cost h1&h3 95 h2&h3 90 h1&h2&h3. 80 ``` Thank you for your help!<issue_comment>username_1: I wouldn't call that *implicit*: ``` const myService = { verify(id, cb){ //... cb(null, "data"); // <---- } }; ``` Upvotes: 0 <issue_comment>username_2: ``` myService.verify(id, verificationCallback(err, response, someOtherArgument)); ``` This would not work. It would call the function immediately, with (most likely) undefined variables. The arguments are not passed implicitly, but explicitly when the function is invoked inside the verify function. See JonasW's answer. Here is a possible solution: ``` function callback(yourThirdArgument) { return function(err, response) { ... } } ``` Usage: ``` myService.verify(id, callback(someOtherArgument)); ``` Upvotes: 1 <issue_comment>username_3: You could use `.bind()`. Pass `null` for the `this` value and `someOtherArgument` will be passed as the first argument to your callback. Here's an [MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/bind) link for more info. ``` const someOtherArgument = ""; // Use .bind() to attach an argument to your callback. myService.verify(id, verificationCallback.bind(null, someOtherArgument)); function verificationCallback(someOtherArgument, err, response) { ... } ``` Upvotes: 1
2018/03/16
554
1,997
<issue_start>username_0: I am using pyodbc to create or drop table. And I wrote a function to drop table if table has already existed. Please look at my syntax below ``` try: cur.execute('Drop table {}'.format(table_name)) except ProgrammingError: cur.execute(create_table) ``` However, I got error message : ``` in upload\_to\_SQL(file\_name, skiprows, table\_name) 26 try: 27 cur.execute('Drop table {}'.format(table\_name)) --->28 except ProgrammingError: 29 cur.execute(create\_table) 30 NameError: name 'ProgrammingError' is not defined ``` I confirm that **ProgrammingError** is the error message if I drop a table didn't exist in the sql server. Anyone have idea how to revise this?<issue_comment>username_1: I wouldnt call that *implicit*: ``` const myService = { verify(id, cb){ //... cb(null, "data"); // <---- } }; ``` Upvotes: 0 <issue_comment>username_2: ``` myService.verify(id, verificationCallback(err, response, someOtherArgument)); ``` This would not work. It would call the function with (most likely) undefined variables immediately. The arguments are not passed implicitly, but explicitly when the function is invocated inside the verify function. See JonasW's anwer. Here is a possible solution: ``` function callback(yourThirdArgument) { return function(err, response) { ... } } ``` Usage: ``` myService.verify(id, callback(someOtherArgument)); ``` Upvotes: 1 <issue_comment>username_3: You could use `.bind()`. Attach null to the `this` value and `someOtherArgument` will be passed as the first argument to your callback. Here's an [MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/bind) link for more info. ``` const someOtherArgument = ""; // Use .bind() to attach an argument to your callback. myService.verify(id, verificationCallback.bind(null, someOtherArgument)); function verificationCallback(someOtherArgument, err, response) { ... } ``` Upvotes: 1
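Regarding the pyodbc question at the top of this thread: the `NameError` usually just means the exception class was referenced without its module. A minimal sketch of the usual fix, assuming the standard `pyodbc` DB-API exception classes (the `cur`, `table_name` and `create_table` names are the ones from the question):

```
import pyodbc

try:
    cur.execute('DROP TABLE {}'.format(table_name))
except pyodbc.ProgrammingError:
    # the DROP failed (e.g. the table did not exist), so create the table instead
    cur.execute(create_table)
```

Alternatively, `from pyodbc import ProgrammingError` makes the bare name usable. Whether ProgrammingError is really what gets raised for a missing table can depend on the ODBC driver, so treat this as a sketch rather than a guarantee.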
2018/03/16
1,164
4,614
<issue_start>username_0: In Swift, we can create a custom UICollectionViewCell as follows: `class MyCell : UICollectionViewCell {}` Next, if we have the method: `override func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {}` which expects us to return a `UICollectionViewCell`, we can return an instance of `MyCell` without any issue. In F#, we must first do the following cast: `thisCell :> UICollectionViewCell` or else F# will complain that we are returning an object of the wrong type. I understand that this is an issue in F# related to type inference, as described here: [Why isn't up-casting automatic in f#](https://stackoverflow.com/questions/16807839/why-isnt-up-casting-automatic-in-f) Why is this not an issue for Swift, which also uses type inference?<issue_comment>username_1: It's Swift's object-oriented design. I don't know how things work in F#, but that's not the question here. The inheritance model of Swift is quite plain and similar to real-life: Any subclass `B`of a class `A` is also an instance of the class `A`. Same as in real-life: You're basically a `Mammal`. Consider this the base class for this (quite odd) example. But you're also an instance of type `Human`. And being a `Human` doesn't mean you're not a `Mammal` anymore: ``` Mammal |--- Human |--- Man |--- Woman ``` Now take that concept to programming with Swift: ``` UICollectionViewCell |--- MyCell |--- AnotherCell ``` It's now perfectly fine that `MyCell(...) is UICollectionViewCell` equals `true` and a `MyCell` instance is accepted as a return value considering the above model. Just don't confuse between an object being an instance of a class and this class being its "most precise / downstream" type. Upvotes: 0 <issue_comment>username_2: The reason is simple: Swift supports Object Oriented Programming, and thus it honours one of the OOP principles - polymorphism. And this is exactly what a `UICollectionView` does: in treats all `UICollectionViewCell` instances and it subclasses instances the same way, referring to the base class when doing operations. Since a subclass inherits all members of the parent class, it's logical to be able to use it wherever the superclass is needed. Note that this feature is unrelated to type inference, the ability of using a subclass instance where a superclass is related to type compatibility, not type inference. Upvotes: 0 <issue_comment>username_3: That question you reference appears to be having trouble in F# inferring the correct type of some property merely through through the declaration of conformance to some interface. `IThings` is an interface to which `FSharpThings` conforms. It strikes me that the equivalent in Swift would be: ``` class A { } class B: A { } protocol MyProtocol { var value: A { get set } } class Test: MyProtocol { var value = B() } ``` This results in a warning that `Test` does not conform to the protocol. You can see the problem here: What type should the compiler infer for `value`? If it did the standard inference, it would infer that `value` in `Test` is a `B` and that `Test` should feel free to call `B` methods. But what if something else tried to change `value` to some `A` instance, like the protocol says it can: Then `Test` methods calling `B` methods on `value` would fail. The other potential alternative would be for the compiler to infer that `value` is of type `A`. This would be complicated for the compiler and, worse, confusing for the programmer writing code for `Test`. 
Any programmer glancing at the code would be confused why they can't call `B` methods because they'd logically assume it was type `B` on the basis of the declaration. It would be a mess. So, the compiler is going to require you to explicitly declare it as type `A`. One solution is to manually upcast to eliminate any ambiguity: ``` class Test: MyProtocol { var value = B() as A } ``` Or, more natural in Swift, one could explicitly declare the type to conform to the protocol: ``` class Test: MyProtocol { var value: A = B() } ``` The `cellForItemAt` situation is different. You are not trying to have the compiler infer the type of some property. It knows precisely what `cellForItemAt` returns, and all the compiler cares about is whether whatever is returned by this method is a `UICollectionViewCell`. It, quite correctly, doesn't care if it's a `UICollectionViewCell` or a subclass of that. Bottom line, the `cellForItemAt` scenario is very different than the one presented in that other question. Upvotes: 1
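To make the F# side of the comparison concrete, here is a minimal, framework-free sketch (the `A`/`B` types are stand-ins I made up, not anything from UIKit): a function whose declared return type is the base class still needs the explicit upcast, which mirrors what the question describes for `UICollectionViewCell`:

```
type A() = class end
type B() = inherit A()

// F# does not implicitly upcast the returned value to the declared return type
let makeA () : A =
    let b = B()
    b :> A          // explicit upcast required; `upcast b` also works here
```

Swift, by contrast, accepts returning the subclass instance directly, because the subtype relationship is applied implicitly at the return site.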
2018/03/16
509
2,143
<issue_start>username_0: I have been reading recently about a new technology called Google Flutter, which is used to develop mobile apps (Android/iOS) with a programming language called Dart. All that being said, do I have to learn Dart as a strong prerequisite to build apps with Flutter (which makes sense), or will I learn Dart by applying and using Flutter components? (I kinda learned React and its conventions plus semantics by developing React Native applications, so is it the same story here?)<issue_comment>username_1: > > Do I have to learn Dart before starting learning Flutter? > > > No. Dart is easy and purposefully similar to Java/JS/C#. If you know one of these, you won't be lost here. As for Flutter's widgets, they're quite similar to React but easier. Upvotes: 5 [selected_answer]<issue_comment>username_2: I'm a C# dev, and I got up and running pretty quickly. To me, it seems like C# with somewhat less ceremony when building stuff, but it feels very, *very* similar. Upvotes: 2 <issue_comment>username_3: Really not. Dart is such a simple and easy language that it inherits almost everything from Java and JS. If you know only one of them, then you are ready to rock with Flutter. Me and almost all Flutter developers learnt Flutter first and Dart subsequently, automatically, without even noticing. And trust me, the language doesn't matter: if you have a hold on any language (except a spoken one), then you are gonna change your world. Flutter is that awesome, and that's why we love Flutter. Upvotes: 2 <issue_comment>username_4: Don't move to Flutter without learning Dart. If you have some programming experience, then it will be easy to learn Dart, and after that you can move on to Flutter. Upvotes: 0 <issue_comment>username_5: Yes, you have to, if you don't know any other programming languages like C or Kotlin. If you already have experience in one, you may start Flutter, but the Dart programming language is mandatory. Go through the official document for better understanding. <https://dart.dev/guides/language/language-tour> Upvotes: 1 <issue_comment>username_6: You must learn Dart completely before Flutter. Upvotes: -1
2018/03/16
1,622
5,248
<issue_start>username_0: I have a dataframe in R that includes a column for distance (12th column) and whether or not there is a match at that distance (13th column). 1 represents a match, 0 represents no match. For example: ``` distance match 1 0 1 1 1 1 2 1 2 0 3 1 4 0 4 0 ``` I want to find the frequency of each of the distance values and find the percentage of matches at each of those values. For example, for the table above, I want to get something like this: ``` distance frequency matches 1 3 2 2 2 1 3 1 1 4 2 0 ``` The current code I have looks like this: ``` #Create a new list with unique distance values distance <- unique(methyl_data[,12]) #Count how many of each distance and how many matches there are total = c() matches = c() dl = length(distance) ml = length(methyl_data[,12]) match = FALSE tcounter = 0 mcounter = 0 for (d in 1:dl) { for (x in 1:ml){ if (distance[d] == methyl_data[x, 12]) { match = TRUE tcounter <- tcounter + 1 if (methyl_data[x, 13] == 1) { mcounter <- mcounter + 1 } } #Add the frequency and number of matches for the last data entry if(d== dl && x ==ml) { total = c(total, tcounter) matches = c(matches, mcounter) } if((distance[d] != methyl_data[x, 12]) && match == TRUE) { match = FALSE total = c(total, tcounter) matches = c(matches, mcounter) tcounter =0 mcounter =0 } } } View(distance) #Create a table with the frequency of distances and matches and percentage of matches percentage = (matches/total) table = cbind(distance, total, matches, percentage) ``` However, this dataframe has almost 2 million rows, so this loop is inefficent. Is there any built-in R function that can simplify my code? My ultimate goal is to see if there is a relationship between distance and matches, so is there a simpler way to do that for a very large dataset? Thanks in advance.<issue_comment>username_1: There are multiple ways to do this. **Method 1:** Using `dplyr` package: ``` library(dplyr) df %>% group_by(distance) %>% mutate(frequency = n(), matches = sum(match)) %>% select(distance, frequency, matches) %>% distinct() print(df) distance frequency matches 1 1 3 2 2 2 2 1 3 3 1 1 4 4 2 0 ``` **Method 2:** Using `data.table` package (prefer this if your data is huge) ``` library(data.table) setDT(df) df[,':='(frequency = .N, matches = sum(match)), .(distance)] df <- unique(df[,.(distance, frequency, matches)]) print(df) distance frequency matches 1: 1 3 2 2: 2 2 1 3: 3 1 1 4: 4 2 0 ``` Upvotes: 2 <issue_comment>username_2: This is good case for using the `dplyr` package: ``` > dplyr::group_by(df, distance) %>% dplyr::summarise(frequency = n(), matches = sum(match)) # A tibble: 4 x 3 distance frequency matches 1 1 3 2 2 2 2 1 3 3 1 1 4 4 2 0 ``` Upvotes: 1 <issue_comment>username_3: Consider your data.frame is `df`. Here you have some alternatives to choose from. 
With R base: **Alternative 1** with two `tapply`s ``` data.frame(distance=unique(df$distance), frequency=with(df, tapply(match, distance, length)), matches=with(df, tapply(match, distance, sum))) ``` **Alternative 2** with one `tapply` ``` out <- do.call(rbind, tapply(df$match, df$distance, function(x){ c(frequency=length(x), matches=sum(x))} )) cbind(distance=unique(df$distance), out) ``` **Alternative 3** Using `xtabs` and `table` ``` x <- xtabs(match ~ distance, data=df) out <- cbind(as.data.frame.table(x), frequency=as.data.frame.table(table(df$distance))[,2]) names(out)[2] <- "matches" ``` **Alternative 4** Using `aggregate` ``` tmp <- do.call(data.frame, aggregate(.~distance, FUN=function(x){ c(frequency=length(x), matches=sum(x)) }, data=df )) names(tmp)[-1] <- c("frequency", "matches") ``` **Alternative 5** Using `rle` and `tapply` ``` x <- rle(df$distance) data.frame(distance=x$values, frequency=x$lengths, matches=tapply(df$match, df$distance, sum)) ``` Upvotes: 2 <issue_comment>username_4: ``` df <- data.frame(distance = c(1,1,1,2,2,3,4,4),match=c(0,1,1,1,0,1,0,0)) df<- split(df,df$distance) distance <- names(df) frequency <- unlist(lapply(df,function(i) length(i$match))) matches <- unlist(lapply(df,function(i) sum(i$match))) res <- data.frame(distance, frequency, matches) ``` Upvotes: 0 <issue_comment>username_5: It isn't elegant, but how about this (assuming your dataframe is `df`): ``` num_matches <- c() for(i in unique(df$distance)){ num_matches[i] <- sum(df$match[df$distance == i]) } new_df <- data.frame("distance" = unique(df$distance), "frequency" = as.vector(table(df$distance)), "matches" = num_matches) ``` Upvotes: 0
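Since the ultimate goal stated in the question is the *percentage* of matches at each distance, the grouped summaries above are easy to extend with one more column. A minimal sketch, assuming the data frame is called `df` and the two columns of interest are named `distance` and `match` as in the answers (swap in your real 12th/13th column names):

```r
library(dplyr)

result <- df %>%
  group_by(distance) %>%
  summarise(frequency = n(),                  # how many rows at this distance
            matches   = sum(match),           # how many of them matched
            pct_match = matches / frequency)  # proportion of matches

print(result)
```

Plotting `pct_match` against `distance` (or fitting a simple model to it) is then a quick way to look for the relationship between distance and matches that the question asks about.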
2018/03/16
742
2,628
<issue_start>username_0: I know C++11 provides [alignof](http://en.cppreference.com/w/cpp/language/alignof), [alignas](http://en.cppreference.com/w/cpp/language/alignas) and [align](http://en.cppreference.com/w/cpp/memory/align), but in this case I want to check an input buffer that is already allocated. I know also that C provides `uintptr_t` to precisely fit a pointer type in a conversion to integer (and then checking alignment would be easy), but this data type is [not guaranteed to be there in C++/C++11](https://stackoverflow.com/a/1846648/2436175). The question is answered [here](https://stackoverflow.com/q/6240330/2436175) for C. It seems that a conversion to any integer would be ok in this case, but in C++ I get a "loses precision" warning. So, now I look at [Converting a pointer into an integer](https://stackoverflow.com/questions/153065/converting-a-pointer-into-an-integer), but there I find an abundant use of `uintptr_t`, which is not guaranteed to be there. So, what is the best way to check if an input pointer is aligned in C++/C++11? (Note: After all this research and reasoning I came up with a solution, but I am looking forward to other proposals!)<issue_comment>username_1: This is the solution I am using right now with `uintmax_t` ``` #include <cstdint> template <typename T> bool isAlignedAs(void* in) { return !(reinterpret_cast<uintmax_t>(in) % alignof(T)); } ``` Tested [here](http://coliru.stacked-crooked.com/a/81ad841405737782). --- To confirm 100% that no lost-precision warning will be generated, one could add anywhere in the code an assertion: ``` static_assert(sizeof (uintmax_t) >= sizeof (void *) , "No suitable integer type for conversion from pointer type"); ``` Upvotes: 0 <issue_comment>username_2: `uintptr_t` is the right type to inspect the numeric value within a pointer. If that doesn't exist, it means that there is no integral type big enough for the whole pointer. However, alignment only affects the low bits, so it's not actually necessary to store the entire value. `size_t` should always be suitable for capturing the bits related to alignment. (In particular, this is the result type of `alignof` so if it isn't sufficient, the language's own alignment logic will break) From the Standard, section `[basic.align]`: > > **Alignments are represented as values of the type `std::size_t`**. Valid alignments include only those values returned by an `alignof` expression for the fundamental types plus an additional implementation-defined set of values, which may be empty. Every alignment value shall be a non-negative integral power of two. > > > Upvotes: 2
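To make the `size_t` reasoning above concrete, here is a minimal sketch of that check (the function name is illustrative; it assumes a platform where `std::size_t` is at least as wide as a data pointer, which the `static_assert` from the first answer can verify):

```cpp
#include <cstddef>   // std::size_t

template <typename T>
bool is_aligned_for(const void* p)
{
    // Only the low bits matter for alignment, and alignof yields std::size_t,
    // so converting the pointer value to std::size_t is enough here.
    return reinterpret_cast<std::size_t>(p) % alignof(T) == 0;
}

// usage sketch:
//   char buffer[64];
//   bool ok = is_aligned_for<double>(buffer);
```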
2018/03/16
639
2,000
<issue_start>username_0: We are currently using a simulation package that outputs data to an Excel spreadsheet. The output cannot be customized to the level that fits the company standards. The good news is that the spreadsheet's format is locked. And the data I need is located in (15) different cells that are scattered all over the spreadsheet. (If they were in a column it'd be easy) I would like to write a VB app in MS Access that would open the file, look at 15 different cells, and then import the data in these cells to a specific field in a table. From there I an do anything I want with the data. But while I'm fairly confident in my abilities with access, I'm having a hard time coming up with code for do what I want to do The 15 pieces of data I need reside in the cells BU22, X38, X41, AX38, AX41, BW38, Q49, Q54, Q61, Q69, Q74, BP68, V86, BH81, & BI84 From what I understand I can use the ws.Range method like this: ``` Dim strSecondValue as String strSecondValue=ws.Range ("BU22") ``` Getting that info from the excel cell to the MS Access table is proving to be difficult. Any help here?<issue_comment>username_1: External data sources can queried in few different ways. ``` SELECT * FROM [Sheet1$A1:A1], [Sheet1$B2:B2] IN 'C:\Book1.xlsx'[Excel 12.0; Hdr=No] ``` To specify field names: ``` select A.F1 as A1, B.F1 as B2 from [Sheet1$A1:A1] as A, [Sheet1$B2:B2] as B in 'C:\Book1.xlsx'[Excel 12.0; Hdr=No] ``` <https://msdn.microsoft.com/en-us/library/bb177907> Upvotes: 1 <issue_comment>username_2: You can simply do this. ``` Sub ImportDataFromRange() ' Delete any previous access table, otherwise the next line will add an additional table DoCmd.DeleteObject acTable, "ExcelRange" ' Import data from Excel using a static range DoCmd.TransferSpreadsheet acLink, acSpreadsheetTypeExcel9, "ExcelStaticRangeData", "C:\your_path_here\ExcelSample.xls", True, "Sheet1!A1:J20" End Sub Private Sub Command0_Click() ImportDataFromRange End Sub ``` Upvotes: 0
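For completeness, a rough VBA sketch of the whole round trip described in the question: open the locked workbook, read the 15 scattered cells, and write them into an Access table. The workbook path, table name (`tblSimData`) and field names are illustrative assumptions, not part of the answers above:

```vb
Public Sub ImportSimulationCells()
    Dim xl As Object, wb As Object, ws As Object
    Dim cellList As Variant, i As Integer

    cellList = Array("BU22", "X38", "X41", "AX38", "AX41", "BW38", "Q49", _
                     "Q54", "Q61", "Q69", "Q74", "BP68", "V86", "BH81", "BI84")

    Set xl = CreateObject("Excel.Application")
    Set wb = xl.Workbooks.Open("C:\path\to\simulation_output.xls", ReadOnly:=True)
    Set ws = wb.Worksheets(1)

    For i = LBound(cellList) To UBound(cellList)
        ' tblSimData(CellAddress, CellValue) is an assumed table layout;
        ' for real data consider a parameterised query instead of string SQL.
        CurrentDb.Execute "INSERT INTO tblSimData (CellAddress, CellValue) " & _
            "VALUES ('" & cellList(i) & "', '" & ws.Range(cellList(i)).Value & "')", dbFailOnError
    Next i

    wb.Close SaveChanges:=False
    xl.Quit
End Sub
```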
2018/03/16
504
1,418
<issue_start>username_0: I want to show all the records in the file, for example all the codes but it only shows me the first record. **demo.xml** ``` xml version="1.0" encoding="UTF-8"? `1000001` The Name The Brand The Category The Color The Size 0 4 5 2 3 `1000002` The Name The Brand The Category The Color The Size 0 4 5 2 3 ``` **index.php** ``` php if (file_exists('demo.xml')) { $xml = simplexml_load_file('demo.xml'); echo $xml-Product->code; } else { exit('Error!'); } ?> ``` **Result:** 1000001 What I'm looking for: 1000001, 1000002 I hope someone can help me because this method is very practical for what I need, it would be great to be able to get all the codes, names and all the information in the same way that I get the first value.<issue_comment>username_1: Looping will be your friend here: ``` $codes = []; $xml = simplexml_load_file('demo.xml'); foreach($xml->Product as $i => $product) { $codes[] = $product->code; } echo implode(', ',$codes); ``` Obviously a short example, but you can access the other items as well, and use them. Upvotes: 3 [selected_answer]<issue_comment>username_2: You could also use [xpath](http://php.net/manual/en/simplexmlelement.xpath.php): ``` $xml = simplexml_load_file('demo.xml'); $code = $xml->xpath('*/code'); echo implode(', ', $code); // 1000001, 1000002 ``` <https://3v4l.org/JjRf1> Upvotes: 0
2018/03/16
408
1,326
<issue_start>username_0: I'm trying to write a php function to check a variable only contains characters from the alphabet, so far I have the following code: ``` if(!preg_match('/[a-z]/i', $name)){ $valid = FALSE; } ``` It only returns false if there are no alphabetic characters in the string to be checked... What am I doing wrong here?<issue_comment>username_1: You should use the builtin function <http://php.net/manual/en/function.ctype-alpha.php> for this purpose: ``` php $strings = array('KjgWZC', 'arf12'); foreach ($strings as $testcase) { if (ctype_alpha($testcase)) { echo "The string $testcase consists of all letters.\n"; } else { echo "The string $testcase does not consist of all letters.\n"; } } ? ``` The above example will output: ``` The string KjgWZC consists of all letters. The string arf12 does not consist of all letters. ``` Source: PHP Manual Upvotes: 3 [selected_answer]<issue_comment>username_2: You forgot a start-of-line (^) and end-of-line ($) signs and you also forgot a quantifier (+ in this case). Your regex pattern should look like this: `'/^[a-z]+$/i'`. ``` var_dump((bool)preg_match('/^[a-z]+$/i', 'foo')); //true var_dump((bool)preg_match('/^[a-z]+$/i', 'foo1')); //false var_dump((bool)preg_match('/^[a-z]+$/i', '123')); //false ``` Upvotes: 1
2018/03/16
958
3,389
<issue_start>username_0: I have a ESP8266, which I use to log weather data over MQTT. Because I want to save some power, I decided to use DeepSleep. Since I want to log the data, it would be good if I could send new entries every minute. This used to work with my old sketch, where I had all data acquisition tasks in the *loop* section, and where I kept the connection to WiFi and the MQTT-Server open. But this does not work with DeepSleep. I need to reconnect after every wake-up and after every wake-up, the ESP8266 basically reboots. Because this does not take exactly the same time on every wake-up, I wanted to know if there is a way to let the ESP8266 log on exactly the same timestamps and go to DeepSleep in between? This is a code sample of the DeepSleep algorithm: ``` String JSON = "{\"sensor\": \"Outdoor Sensor\", \"data\":[" + String(temp) + "," + String(hum) + "," + String(brightness) + "]}"; client.publish(topic, JSON.c_str(), true); //publish data as JSON to MQTT delay(10); //somehow if this is not added, the data does not get logged. Serial.println("Going into deep sleep for 60 seconds"); ESP.deepSleep(56e6); // because of microseconds - processing data takes about 4sec, but this is very unprecise ``` This is from PhpMyAdmin, in order to better visualize the problem: [![enter image description here](https://i.stack.imgur.com/oQBr9.png)](https://i.stack.imgur.com/oQBr9.png) If it can not be done with an ESP8266, might a ESP32 help?<issue_comment>username_1: I have found a (not perfect) workaround. If I do ``` Serial.println("Going into deep sleep so that next data gets posted in a minute"); Serial.println(60e6-micros()); ESP.deepSleep(60e6-micros()); // because of microseconds; micros() is reset on wake-up ``` then I get slightly better results. I do however not understand, why it is not working perfectly. In my opinion it should, since I count the time the whole boot process + code run took and then substract that time from the time the ESP8266 is in DeepSleep. Upvotes: 0 <issue_comment>username_2: It is suggested [using an external RTC](https://arduino.stackexchange.com/questions/91503/how-do-i-wake-the-esp8266-from-deep-sleep-on-a-specific-date-and-time/91504#91504) if time tracking is of importance. Using the internal RTC, improving meassurement of time passage during sleep apparently includes guessing at the sleeping chip's temperature: Time keeping on the ESP8266 is technically quite challenging. Despite being named RTC, while it does keep a counter ticking while the module is sleeping, the accuracy with which it does so is highly dependent on the temperature of the chip. Said temperature changes significantly once in deep sleep mode, calibration performed while the chip was active becomes useless mere moments after the chip has gone to sleep. <http://arduino.stackexchange.com/questions/65530/ddg#65532> Where high accuracy isn't required I would suggest to take a few samples and calculate the average interval error. **example with 5 minutes intervals:** [excel calculation screenshot](https://i.stack.imgur.com/1iPnD.png) ``` int32_t deep_sleep_time_compensation = -145000000; // chip temperature dependant deep sleep duration error compensation in μs ESP.deepSleep(300e6-micros()-deep_sleep_time_compensation); // because of microseconds; micros() is reset on wake-up ``` Upvotes: 2 [selected_answer]
2018/03/16
741
2,452
<issue_start>username_0: I am trying to make a C++ program that accepts a username and password from the user and matches it to the ones that are already stored in the variables. For the password, I am using the * (asterisk) character to appear instead of the actual characters for privacy. ``` #include #include #include #include void main(void) { clrscr(); gotoxy(10,10); cout << "Username: "; gotoxy(10,12); cout << "Password: "; char name1[10] = {"Apple"}; char name2[10]; char pass1[10] = {"<PASSWORD>"}; char pass2[10] = {""}; gotoxy(23,10); cin >> name2; gotoxy(23,12); cout << pass2; int i = 0; char ch; while ((ch = getch()) != '\r') { putch('*'); pass2[i] = ch; i++; } if (strcmp(name1, name2) == 0 && strcmp(pass1, pass2) == 0) { clrscr(); gotoxy(40,10); cout << "YES!!!"; } else { clrscr(); gotoxy(40,10); cout << "NO!!!"; } } ``` The problem is that when I try to use the backspace key on the keyboard, it doesn't delete the character; instead it adds more characters to the end. I can make it work by importing the C language's string.h library. But is there any way I can make it work by using the libraries that are already defined in the code, without having to use the string.h library?<issue_comment>username_1: Backspace is considered a "non-printing character", which is where this behavior is coming from. You'll likely have to handle it as you've already thought out in the question. You can click here for a quick glance at this and other non-printing characters. <http://www.physics.udel.edu/~watson/scen103/ascii.html> Upvotes: 0 <issue_comment>username_2: > > The problem is that when I try to use the backspace key on the keyboard, it doesn't delete the character; instead it adds more characters to the end > > > Correct! When you use the backspace key, it actually generates a backspace *character* (ASCII 0x8). The backspace key doesn't "delete characters"; you only think it does, because software is normally written to look out for backspace characters and behave specially when it encounters them. In this case, the writer of that software is *you*! You'll need to look at the value of `ch`, see whether it's a backspace character … and, if so, "do the needful" (only you can tell what this is). > > I can make it work by importing the C language's string.h library. > > > Including `string.h` has nothing to do with it and will have no effect on this. Upvotes: 1
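A minimal sketch of what "do the needful" can look like for this program, using the same `conio.h`-style `getch()`/`putch()` calls as the question (drop it in place of the original `while` loop and call it as `read_password(pass2, 10);`):

```c
/* Reads a masked password into buf (capacity cap), erasing one asterisk
   whenever backspace (ASCII 8, '\b') is pressed. Sketch only. */
void read_password(char *buf, int cap)
{
    int i = 0;
    char ch;
    while ((ch = getch()) != '\r') {
        if (ch == '\b') {            /* backspace: remove the last character */
            if (i > 0) {
                i--;
                putch('\b');         /* step back ...       */
                putch(' ');          /* ... blank the '*'   */
                putch('\b');         /* ... step back again */
            }
        } else if (i < cap - 1) {    /* keep room for the terminator */
            buf[i++] = ch;
            putch('*');
        }
    }
    buf[i] = '\0';                   /* terminate so strcmp() works */
}
```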
2018/03/16
353
1,339
<issue_start>username_0: My problem is for how do i test a Webrequest timeout . My C# code is making a SOAP service call to a 3rd party vendor whose code i do not have . When their service goes down , most of my application URLs also go down as well. So I thought of adding a timeout ; and read this blog as well which was helpful , but what would be the best way to test my code ? [Adjusting HttpWebRequest Connection Timeout in C#](https://stackoverflow.com/questions/1500955/adjusting-httpwebrequest-connection-timeout-in-c-sharp)<issue_comment>username_1: Try connecting to an existing internet address but without using a standard port - a port where no service is running. This way the firewall should drop the packet and you should receive a timeout. Eg. Use one of the big internet players but instead of port 80 (http) or 443 (https) use 354 Upvotes: 0 <issue_comment>username_2: You can build a wrapper around the third part call and use the interface to mock your calls made to the third party so that you can return what you are expecting. Upvotes: 0 <issue_comment>username_3: I had the same problem and solved it using <https://httpstat.us/> In my case I needed a 60 secs timeout so I made my request to <https://httpstat.us/200?sleep=120000> and it worked. Posted in case it is useful for anyone. Cheers. Upvotes: 2
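Putting the httpstat.us idea together with an explicit client-side timeout gives a deterministic test. A hedged C# sketch (URL and timeout values are illustrative):

```csharp
using System;
using System.Net;

class TimeoutDemo
{
    static void Main()
    {
        // The endpoint deliberately sleeps for 120 s and the client gives up
        // after 5 s, so the catch block below should always run.
        var request = (HttpWebRequest)WebRequest.Create("https://httpstat.us/200?sleep=120000");
        request.Timeout = 5000;

        try
        {
            using (var response = (HttpWebResponse)request.GetResponse())
            {
                Console.WriteLine("Unexpected success: " + response.StatusCode);
            }
        }
        catch (WebException ex) when (ex.Status == WebExceptionStatus.Timeout)
        {
            Console.WriteLine("Timed out as expected; exercise your fallback logic here.");
        }
    }
}
```

The same pattern works inside a unit test if you wrap the SOAP call behind an interface and assert that your fallback path runs when the timeout fires.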
2018/03/16
446
1,654
<issue_start>username_0: Guys, do you have an answer to this strange bug? I moved my project to another server and configured everything, but Laravel now gives me this error: ``` FatalErrorException in /public_html/app/Http/Controllers/GameController.php line 718: Call to undefined function App\Http\Controllers\iconv_strlen() ``` In this code: ``` public function DepositRedirect() { if(Auth::guest()) return 'You must be authorized!'; if(!$this->user->trade) return 'You must set trade list in your profile!'; $bot = DB::table('bots')->first(); if(is_null($bot)) return "Bot not found!"; if(iconv_strlen($bot->trade) < 1) return "Admin hasn't set trade link with bot."; return redirect($bot->trade); } ``` Why? This is a standard PHP function that returns the character count of a string, and I use PHP 7.x.
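One thing worth checking: `iconv_strlen()` is provided by PHP's iconv extension, so a "Call to undefined function" on a freshly moved project usually means that extension is missing or disabled on the new server. A quick diagnostic sketch:

```php
<?php
// If either of these prints false, enable/install ext-iconv on the new host
// (e.g. uncomment extension=iconv in php.ini or install the matching package).
var_dump(extension_loaded('iconv'));
var_dump(function_exists('iconv_strlen'));

// If enabling iconv is not an option on that host, mb_strlen() from
// ext-mbstring is a possible drop-in alternative for counting characters.
```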
2018/03/16
477
1,168
<issue_start>username_0: I have a list of dictionaries I wish to sort in python by a key 'id'. ``` items = [{'id' : 883},{'id' : 547},{'id' : 898},{'id' : 30},{'id' : 883}] ``` I wish to sort them in a specific order based on this sorting order given: ``` [30, 883, 547, 898] ``` How would I go about doing this in python3?<issue_comment>username_1: Use the `key` argument and a list with the custom sorting order. ``` sort_order = [30, 883, 547, 898] items.sort(key=lambda d: sort_order.index(d['id'])) ``` Using @Sphinx 's recommendation, you could index the list beforehand for some added speed improvement `O(1)` instead of `O(n)` ``` sort_order_index = {val: i for i, val in enumerate(sort_order)} items.sort(key=lambda d: sort_order_index.get(d['id'], 0)) ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: You can try without loop in one line something like this: ``` items = [{'id' : 883},{'id' : 547},{'id' : 898},{'id' : 30},{'id' : 883}] sorting_order=[30, 883, 547, 898] print(sorted(items,key=lambda x:sorting_order.index(x['id']))) ``` output: ``` [{'id': 30}, {'id': 883}, {'id': 883}, {'id': 547}, {'id': 898}] ``` Upvotes: 0
2018/03/16
477
1,250
<issue_start>username_0: i would like to output a calculation of two input fields into a third, while it is typed in (keyup, i think?). The following is just a visual example, cause I don't know how to do it better: ``` $(document).ready(function() { var n1 = $("#n1").val(); var n2 = $("#n2").val(); var sum = ((n1 + n2) / 10) * 2; sum = document.getElementById('sum-out') return sum; }); ``` pls help...
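A minimal sketch of the keyup idea described above, assuming jQuery is loaded and the page has inputs with ids `n1` and `n2` plus an output element with id `sum-out` (the ids come from the snippet, the rest is illustrative):

```js
$(document).ready(function () {
  function recalc() {
    var n1 = parseFloat($("#n1").val()) || 0;
    var n2 = parseFloat($("#n2").val()) || 0;
    var sum = ((n1 + n2) / 10) * 2;   // same formula as in the question
    $("#sum-out").val(sum);           // use .text(sum) if #sum-out is not an input
  }
  $("#n1, #n2").on("keyup", recalc);  // recalculate on every keystroke in either field
});
```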
2018/03/16
594
2,337
<issue_start>username_0: I have an app that's having some issues on chromebooks and want to stop serving the app to those devices. What's the best way to prevent chromebooks from getting the app? From the docs there seem to be these solutions: <https://developer.android.com/topic/arc/manifest.html> > > 1) Exclude specific devices in the Google Play Console. > > > 2) Filter devices with no touchscreen hardware by explicitly declaring android.hardware.touchscreen as being required in order to install your app. > > > I don't have a chromebook on me, so I can't really test. Are those the best ways?<issue_comment>username_1: > > What's the best way to prevent chromebooks from getting the app? > > > There is no "best". The reliable but difficult to manage approach is the first option you cited: block them in the Play Developer Console. The problem is that the list of Chrome OS devices changes frequently, and you may miss devices. The second option that you cited will not work, as there are plenty of Chrome OS devices that have touchscreens. In fact, a majority might have touchscreens, as they are fairly common among Chromebooks. [The page that you cited](https://developer.android.com/topic/arc/manifest.html) lists many features that Chrome OS devices lack. If Chrome OS devices lacking one of those features is the source of your problem, add the appropriate element. For example, if you need NFC, add `<uses-feature android:name="android.hardware.nfc" android:required="true" />` to your manifest. That will prevent your app from being installed on *any* incompatible device, including phones or tablets that lack NFC. If the issue is something else, ideally you fix the problem, as the problem may not be unique to Chrome OS. In the end, you're welcome to pretend that your app needs one of those features, just to stay off of Chrome OS devices. For example, `<uses-feature android:name="android.hardware.nfc" android:required="true" />` as noted above would keep you off of Chrome OS devices. However, it *also* keeps you off of phones and tablets that lack NFC. Upvotes: 4 [selected_answer]<issue_comment>username_2: **Solution 1** This is not 100% correct, but it can exclude Chromebook and tablet from downloading the app, by adding this line in the manifest file ``` ``` **Solution 2** ``` ``` **Solution 3** You can exclude the Chromebook devices in Google Play Console > Device Catalog > Filter Chromebook by 'form factor' > Exclude Upvotes: 2
2018/03/16
1,502
4,995
<issue_start>username_0: So, a method would be to `cout<<"";` if the condition is false but I don't feel like it's the right way and `break` doesn't seem to work. Is there something I should know ? ``` template T get(const string &prompt) { cout<>ret; return ret; } int main() { vectorx; for(int i=0;i<1000;i++) x.push\_back((rand()%1200+1200)); int rating=get("Your rating: ");x.push\_back(rating); sortVector(x); for(int i=0;i ```<issue_comment>username_1: Each part of the ternary operator must be able to evaluate to the same type\*, so you can't have one part output (which will return the `cout` object) while the other `break`'s. Instead, in your situation, why don't you add the condition to your for? ``` for(int i = 0; i < x.size() && x[i] == rating; i++) { cout<<"Found your rating on pozition "; } ``` But this might not be what you actually want ============================================ It seems you're trying to find an item's position in the array. If this is the case, and you're only looking for the first occurence, I'd suggest this instead: ``` int pos; for(pos = 0; pos < x.size() && x[pos] != rating; pos++) { } if(pos != x.size()) { cout<<"Found your rating on pozition " << pos; } else { cout << "Couldn't find it!"; } ``` Or even better, use [`std::find`](http://en.cppreference.com/w/cpp/algorithm/find)! --- \*You could also have a throw in there. Thanks to @Lightness! Upvotes: 3 [selected_answer]<issue_comment>username_2: Use of the conditional operator is not the best strategy for what you are trying to do. Simplify your code. Use `if`-`else`. ``` for(int i = 0; i < x.size(); i++) { if (x[i] == rating) { std::cout << "Found your rating on position " << i << ".\n"; break; } else { std::cout << "Still looking for your rating.\n"; } } ``` Upvotes: 2 <issue_comment>username_3: You don't. Get rid of the crypticism, and write out a proper `if` statement like everybody else. Upvotes: 2 <issue_comment>username_4: You can't use `break` in a conditional. To hack this up using a non-throwing conditional (for educational purposes per your comments), you actually need it in the `for` loop as follows: ``` for (int i = 0; i < x.size(); i = x[i] == rating ? ((cout << "found at " << i), x.size()) : i + 1) ; ``` What this does is advance `i` to `x.size()` (so the loop will terminate) when the rating is found (after evaluating output and ignoring it using the comma operator), otherwise moving `i` to the next element index (`i + 1`). Put another way, the use of the conditional operator above is equivalent to: ``` if (x[i] == rating) { std::cout << "found at " << i; i = x.size(); // poor man's "break" } else i = i + 1; ``` (You could of course put the code immediately above in the statement controlled by your `for` loop... `for (size_t i = 0; i < x.size(); ) if (x[i] == ...`), but then you wouldn't be using the conditional operator (and could use an actual break, and might as well use `++i` in the `for` advancement statement and ditch the `else` above, at which point you have username_2's answer). I put a complete demo program on [coliru.stacked-crooked.com](http://coliru.stacked-crooked.com/a/11a4aea64b69a3cf). --- As I mentioned in a comment, you can't `break` in a conditional, but you can `throw`. Using exceptions for an event you expect to occur is generally frowned upon (a coding convention that reserves them for unexpected events helps other programmers understand how you're expecting your program to run), but it's possible: ``` int main() try { ... 
for (size_t i = 0; i < x.size(); ++i) x[i] == rating ? throw i : (void)0; } catch (size_t i) { std::cout << "found at " << i << '\n'; } ``` You can find/run/edit this on coliru [here](http://coliru.stacked-crooked.com/a/4ed5e6dd8e8d6c8f). Note `(void)0` is evaluated but does nothing - it has no side effects. You could use any expression there - `0`, `'x'`, `nullptr` - but prefixing with `(void)` emphasises that the value can't and won't be read and used for anything. So it would be equivalent, clearer and more concise to use `if (x[i] == rating) throw i;` if we weren't set on using the conditional operator. --- How you should actually write it if you're not trying to teach yourself how to abuse conditional operators: ``` std::cout << "found at " << x.find(rating) - x.begin() << '\n'; ``` Often when you use `find` you should be checking against `x.end()` to see if your value was found, but in your case you know 100% that the value is in the vector so can safely keep the code simple. Upvotes: 0 <issue_comment>username_5: It's possible with the GNU statement-expression extension: `({ break; })` is an expression of type `void`. But there is never reason to do that except in case of extreme macro hackery, which you probably shouldn't be doing yourself anyway. Upvotes: 2
2018/03/16
836
3,259
<issue_start>username_0: I have some app. I wrote and tested it on Android 5.1. When I tried to run my app on Android 6.0 I've got the exception. The exception appeare after I try to bind a FloatingActionButton and set OnClickListener because of null. FindViewById returns null when I try to get it with FrameLayout from "include" directive. **MainActivity.java** ``` public class MainActivity extends AppCompatActivity { private Context instance; private MainPagerAdapter mSectionsPagerAdapter; public ViewPager mViewPager; FloatingActionButton fab; FloatingActionButton fab_settings; TabLayout tabLayout; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); Toolbar toolbar = (Toolbar) findViewById(R.id.toolbar); setSupportActionBar(toolbar); mSectionsPagerAdapter = new MainPagerAdapter(getSupportFragmentManager(), this); mViewPager = (ViewPager) findViewById(R.id.container); mViewPager.setAdapter(mSectionsPagerAdapter); tabLayout = (TabLayout) findViewById(R.id.tabs); tabLayout.setupWithViewPager(mViewPager); fab = (FloatingActionButton) findViewById(R.id.fab); fab.setOnClickListener(fabOnClick); fab_settings = (FloatingActionButton) findViewById(R.id.fab_settings); // <---- Returns null fab_settings.setOnClickListener(setsOnClick); // <---- Exception } } ``` **activity\_main.xml** ``` xml version="1.0" encoding="utf-8"? ``` **fab\_layout.xml** ``` ``` When I use SDK >= 23 I have the exception and if SDK < 23 it's OK. How can I solve it? **UPDATE:** In `activity_main.xml` I changed from: ``` ``` to: ``` ``` After it `fab_settings` is still null. I don't understand why!<issue_comment>username_1: You can write something like this for FloatingActionButton for the setOnClickListerner method. ``` fab.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { //Write here anything that you wish to do on click of FAB } }); ``` Upvotes: -1 <issue_comment>username_2: First find framelayout with fab\_frame id. Second framelayout.findviewbyid(your fab id) Check this question [findViewById not working for an include?](https://stackoverflow.com/questions/10803672/findviewbyid-not-working-for-an-include) Upvotes: 0 <issue_comment>username_3: You haven't declared "fabOnClick" & "setsOnClick". Declare them, hopefully it will solve the problem. ``` fab = (FloatingActionButton) findViewById(R.id.fab); fab.setOnClickListener(fabOnClick); fab_settings = (FloatingActionButton) findViewById(R.id.fab_settings); // fab_settings.setOnClickListener(setsOnClick); // <---- Exception ``` Upvotes: 0 <issue_comment>username_4: ``` View frame = findViewById(R.id.fab_frame); fab_settings = frame.findViewById(R.id.fab_settings); fab_settings.setOnClickListener(setsOnClick); ``` Upvotes: 0 <issue_comment>username_5: I solved my problem. It was my mistake. I didn't notice the new folder `layout-v23` where the activity was exist. My problem was solved after deleting `layout-v23`. Thank all of you. Upvotes: 1
2018/03/16
1,005
3,100
<issue_start>username_0: I build up an array with a counter when a function is true. So if it were true 3 times in a row, the array looks like [1,2,3]. If the function is not true there is a gap in the counter and could look like this [1,2,3,5]. In another function I need to determine if the array length is > 2 and the values in the array are in consecutive order. So [1,2,3] it should return true. If [1,2,3,5] it should return false. I haven't found anything that's worked. Any help with a possible solution would be much appreciated. I have seen this (and have tried it) but it doesn't work. ``` Array.prototype.is_consecutive = (function () { var offset = 0; // remember the last offset return function () { var start = offset, len = this.length; for (var i = start + 1; i < len; i++) { if (this[i] !== this[i - 1] + 1) { break; } } offset = i; return this[start]; }; })(); ```<issue_comment>username_1: If you can absolutely rely on the algorithm that populates your array, following the stated rules > > I build up an array with a counter when a function is true. So if it were true 3 times in a row, the array looks like [1,2,3]. If the function is not true there is a gap in the counter and could look like this [1,2,3,5]. > > > Then you all you need to do is check that the last element of the array is the same value as the length of the array: ```js var good = [1, 2, 3]; var bad = [1, 2, 3, 5]; var isValid = function(arr) { return (arr.length > 2 && arr[arr.length - 1] === arr.length); // Thanks to <NAME>. } console.log(isValid(good)); console.log(isValid(bad)); ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: ``` Array.prototype.isConsecutive = function(){ for(var i = 1; i < this.length; i++) if(this[i] !== this[i - 1] + 1) return false; return true; }; ``` Or the obfuscated version: ``` const isConsecutive = arr => !!arr.reduce((prev, curr) => curr === prev + 1 ? curr : 0); ``` Upvotes: 0 <issue_comment>username_3: You could use a closure over the first index value and increment that value while checking. ```js function isConsecutive(array) { return array.length >= 2 && array.every((v => a => a === v++)(array[0])); } console.log([[1, 2, 3, 4], [1, 2, 3, 5]].map(isConsecutive)); ``` Upvotes: 1 <issue_comment>username_4: ```js var array=[5,6,7,8]; function continuous(arr){ var max = Math.max(...arr); var min = Math.min(...arr); return (((max-min)+1) * (min + max) / 2) == arr.reduce((a, b) => a + b, 0) } console.log(continuous(array)) ``` Upvotes: 0 <issue_comment>username_5: You can use this too, it is recursive: ```js Array.prototype.isConsecutive = function(){ if (arguments.length==0) return this.isConsecutive(0); else if (arguments[0]>=this.length) return true; else return (this[arguments[0]]==arguments[0]+1&&this.isConsecutive(arguments[0]+1 ));} cons=[1,2,3]; console.log(cons.isConsecutive()); non_cons=[1,2,2,4]; console.log(non_cons.isConsecutive()); ``` Upvotes: 0
2018/03/16
868
2,654
<issue_start>username_0: Let's say I have these 3 models ``` class Restaurant(models.Model): name = models.CharField(...) class Eater(models.Model): name = models.CharField(...) class Transaction(models.Model): eater = models.ForeignKey('Eater', related_name='transactions') restaurant = models.ForeignKey('Restaurant', related_name='transactions') ``` How can I write an endpoint like `eater/1/restaurant` to query for all the restaurant that `eater1` has a transaction with? My DB is in PostgreSQL if that matters.
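A sketch of one way this can look, assuming Django REST Framework (which the URL style suggests); only the three models above are taken as given, while the serializer, view and URL names are illustrative:

```python
from rest_framework import generics, serializers

from .models import Restaurant   # adjust to your app's import path


class RestaurantSerializer(serializers.ModelSerializer):
    class Meta:
        model = Restaurant
        fields = ("id", "name")


class EaterRestaurantList(generics.ListAPIView):
    serializer_class = RestaurantSerializer

    def get_queryset(self):
        # Follow Transaction.restaurant backwards via related_name='transactions'
        # and keep each restaurant once even if there are several transactions.
        return Restaurant.objects.filter(
            transactions__eater__id=self.kwargs["eater_pk"]
        ).distinct()


# urls.py (illustrative):
# path("eater/<int:eater_pk>/restaurant", EaterRestaurantList.as_view())
```

With plain Django views the same `Restaurant.objects.filter(transactions__eater__id=...).distinct()` queryset applies; `.distinct()` is what keeps PostgreSQL from returning a restaurant once per transaction.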
2018/03/16
427
1,359
<issue_start>username_0: I have a sequence of strings which I get in following format: * Project1:toyota:Corolla * Project1:Hoyota:Accord * Project1:Toyota:Camry As you can see middle section of the string is not consistent case (for Corolloa, it is listed as toyota). I need to change above as follows: * Project1:**T**oyota:Corolla * Project1:**H**oyota:Accord * Project1:**T**oyota:Camry I want to make middle section of the string to be Title Case. I am using following ``` static TextInfo textInfo = new CultureInfo( "en-US" ).TextInfo; ``` and using .ToTitleCase but the issue with TitleCase is if the string is in UPPERCASE, it would not change to TitleCase. Do we know how to handle a case when string is uppercase.<issue_comment>username_1: You can use `.ToTitleCase()` ``` var myString = "Project1:toyota:Corolla"; TextInfo textInfo = new CultureInfo( "en-US" ).TextInfo; myString = textInfo.ToTitleCase(myString); ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: You could use [TextInfo.ToTitleCase](http://msdn.microsoft.com/en-us/library/system.globalization.textinfo.totitlecase.aspx) ``` textInfo.ToTitleCase("Project1:toyota:Corolla") ``` Upvotes: 2 <issue_comment>username_3: Regular expression alternative: ``` var result = Regex.Replace("Project1:toyota:Corolla", @"\b[a-z]", m => m.Value.ToUpper()); ``` Upvotes: 0
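One detail worth knowing: `ToTitleCase` deliberately leaves words that are entirely uppercase alone (it treats them as acronyms), which is why all-caps input stays unchanged. Lower-casing first avoids that, and since only the middle section needs fixing, it can be applied to that segment alone. A hedged sketch (the helper name is illustrative):

```csharp
using System.Globalization;

static class TitleCaseHelper
{
    private static readonly TextInfo UsTextInfo = new CultureInfo("en-US").TextInfo;

    // "Project1:TOYOTA:Camry"   -> "Project1:Toyota:Camry"
    // "Project1:toyota:Corolla" -> "Project1:Toyota:Corolla"
    public static string FixMiddleSegment(string line)
    {
        string[] parts = line.Split(':');
        if (parts.Length == 3)
        {
            // ToLower first so ToTitleCase does not skip ALL-CAPS "acronyms"
            parts[1] = UsTextInfo.ToTitleCase(parts[1].ToLower());
        }
        return string.Join(":", parts);
    }
}
```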
2018/03/16
698
2,479
<issue_start>username_0: Below is my console log: ``` jy03154586@ubuntu:~/Works$ docker build -t ui . Sending build context to Docker daemon 50.18MB Step 1/9 : FROM node:carbon-alpine as build ---> adc4b0f5bc53 ... Step 9/9 : COPY --from=build /app/dist /usr/share/nginx/html ---> 997cc6252b83 Successfully built 997cc6252b83 Successfully tagged ui:latest jy03154586@ubuntu:~/Works$ docker images ls REPOSITORY TAG IMAGE ID CREATED SIZE jy03154586@ubuntu:~/Works$ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES jy03154586@ubuntu:~/Works$ ``` I tried to build an angularjs application image, but where is the built image? It shows `successfully built`<issue_comment>username_1: Leave off the `ls` - that will filter the results. Just do `docker images`. Upvotes: 3 [selected_answer]<issue_comment>username_2: docker image ls docker images ls docker images Are all different commands You want docker image ls --filter reference=ui:latest Upvotes: 2 <issue_comment>username_3: $ **docker images --help** Usage: docker images [OPTIONS] [REPOSITORY[:TAG]] List images Options: -a, --all Show all images (default hides intermediate images) --digests Show digests -f, --filter filter Filter output based on conditions provided --format string Pretty-print images using a Go template --no-trunc Don't truncate output -q, --quiet Only show numeric IDs **$ docker image ls --help** Usage: docker image ls [OPTIONS] [REPOSITORY[:TAG]] List images Aliases: ls, images, list Options: -a, --all Show all images (default hides intermediate images) --digests Show digests -f, --filter filter Filter output based on conditions provided --format string Pretty-print images using a Go template --no-trunc Don't truncate output -q, --quiet Only show numeric IDs As you see docker image ls OR docker images OR docker image list does all the same thing. > > Helps to list all the images build on a docker host > > > List all images tagged as ui **docker images ui** Now if you are referring to "build" NAME in your [multi-stage build](https://docs.docker.com/develop/develop-images/multistage-build/) then it would be listed in your output for "docker images" with REPOSITORY ID as & TAG as . Upvotes: 0 <issue_comment>username_4: use this: ``` sudo systemctl daemon-reload sudo systemctl restart docker ``` Upvotes: 2
2018/03/16
998
4,571
<issue_start>username_0: I'm new to automating webpage access, so forgive what is probably a remedial question. I'm using C#/Windows.Forms in a console app. I need to programmatically enter the value of an input on a webpage that I cannot modify and that is running javascript. I have successfully opened the page (triggering `WebBrowser.DocumentCompleted).` I set browser emulation mode to IE11 (in registry), so scripts run without errors. When DocumentCompleted() triggers, I am unable to access the document elements without first viewing the document content via `MessageBox.Show()`, which is clearly not acceptable for my unattended app. What do I need to do so that my document elements are accessbile in an unattended session (so I can remove MessageBox.Show() from the code below)? Details below. Thank you. The input HTML is: My DocumentCompleted event handler is: ``` private static void LoginPageCompleted(object sender, WebBrowserDocumentCompletedEventArgs e) { WebBrowser wb = ((WebBrowser)sender); var document = wb.Document; // I'm trying to eliminate these 3 lines var documentAsIHtmlDocument = (mshtml.IHTMLDocument)document.DomDocument; var content = documentAsIHtmlDocument.documentElement.innerHTML; MessageBox.Show(content); String classname = null; foreach (HtmlElement input in document.GetElementsByTagName("input")) { classname = input.GetAttribute("className"); if (classname == "input-class") { input.SetAttribute("value", password); break; } } } ```<issue_comment>username_1: If the problem comes from the creation of a STAThread, necessary to instantiate the underlying Activex component of WebBrowser control, this is a modified version of [Hans Passant's code](https://stackoverflow.com/questions/4269800/webbrowser-control-in-a-new-thread?answertab=active#tab-top) as shown in the SO Question you linked. Tested in a Console project. ``` class Program { static void Main(string[] args) { NavigateURI(new Uri("[SomeUri]", UriKind.Absolute), "SomePassword"); Console.ReadLine(); } private static string SomePassword = "<PASSWORD>"; private static void NavigateURI(Uri url) { Thread thread = new Thread(() => { WebBrowser browser = new WebBrowser(); browser.DocumentCompleted += browser_DocumentCompleted; browser.Navigate(url); Application.Run(); }); thread.SetApartmentState(ApartmentState.STA); thread.Start(); } protected static void browser_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e) { WebBrowser browser = ((WebBrowser)sender); if (browser.Url == e.Url) { while (browser.ReadyState != WebBrowserReadyState.Complete) { Application.DoEvents(); } HtmlDocument Doc = browser.Document; if (Doc != null) { foreach (HtmlElement input in Doc.GetElementsByTagName("input")) { if (input.GetAttribute("type") == "password") { input.InnerText = <PASSWORD>Password; //Or //input.SetAttribute("value", <PASSWORD>Password); break; } } } Application.ExitThread(); } } } ``` Upvotes: 0 <issue_comment>username_2: The problem for me was that the page I'm accessing is being created by javascript. Even though documentComplete event was firing, the page was still not completely rendered. I have successfully processed the first page by waiting for the document elements to be available and if not available, doing `Application.DoEvents();` in a loop until they are, so I know now that I'm on the right track. 
This SO Question helped me: [c# WebBrowser- How can I wait for javascript to finish running that runs when the document has finished loading?](https://stackoverflow.com/questions/20693266/c-sharp-webbrowser-how-can-i-wait-for-javascript-to-finish-running-that-runs-wh?rq=1) Note that checking for DocumentComplete does not accurately indicate the availability of the document elements on a page generated by javascript. I needed to keep checking for the elements and running `Application.DoEvents()` until they became available (after the javascript generated them). Upvotes: 1
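For reference, a sketch of that polling approach as code; it assumes the same WinForms `WebBrowser` setup as above and that the target input is identified by its class name (the helper name and retry limit are illustrative):

```csharp
// Keeps pumping messages until the page's JavaScript has created the input,
// or gives up after maxTries attempts.
private static HtmlElement WaitForInput(WebBrowser wb, string className, int maxTries = 100)
{
    for (int attempt = 0; attempt < maxTries; attempt++)
    {
        if (wb.Document != null)
        {
            foreach (HtmlElement input in wb.Document.GetElementsByTagName("input"))
            {
                if (input.GetAttribute("className") == className)
                    return input;                 // element finally rendered
            }
        }
        Application.DoEvents();                   // let the browser run its scripts
        System.Threading.Thread.Sleep(100);       // avoid spinning at 100% CPU
    }
    return null;                                  // caller decides how to handle a miss
}

// inside LoginPageCompleted:
//   var field = WaitForInput((WebBrowser)sender, "input-class");
//   if (field != null) field.SetAttribute("value", password);
```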
2018/03/16
904
2,938
<issue_start>username_0: How can I split this? ``` 'Symptoms may include:Absent or small knucklesCleft palateDecreased skin creases at finger jointsDeformed earsDroopy eyelidsInability to fully extend the joints from birth (contracture deformity)Narrow shouldersPale skinTriple-jointed thumbs' ``` Desired Output should take this form ``` Symptoms may include: Absent or small knuckles Cleft palate Decreased skin creases at finger joints Deformed ears Droopy eyelids Inability to fully extend the joints from birth (contracture deformity) Narrow shoulders Pale skin Triple-jointed thumbs ``` Like split on Capital letters.<issue_comment>username_1: Use `re.findall` (pattern improved thanks to @Brendan Abel and @JFF): ``` fragments = re.findall('[A-Z][^A-Z]*', text) ``` ``` print(fragments) ['Symptoms may include:', 'Absent or small knuckles', 'Cleft palate', 'Decreased skin creases at finger joints', 'Deformed ears', 'Droopy eyelids', 'Inability to fully extend the joints from birth (contracture deformity)', 'Narrow shoulders', 'Pale skin', 'Triple-jointed thumbs'] ``` **Details** ``` [A-Z] # match must begin with a uppercase char [^A-Z]* # further characters in match must not contain an uppercase char ``` Note: `*` lets you capture sentences with a single upper-case character. Substitute with `+` if that is not desired functionality. Also, if you want your output as a multiline string: ``` print('\n'.join(fragments)) ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: ``` >>> s = 'Symptoms may include:Absent or small knucklesCleft palateDecreased skin creases at finger jointsDeformed earsDroopy eyelidsInability to fully extend the joints from birth (contracture deformity)Narrow shouldersPale skinTriple-jointed thumbs' >>> print(''.join(('\n' + c if c.isupper() else c) for c in s)[1:]) Symptoms may include: Absent or small knuckles Cleft palate Decreased skin creases at finger joints Deformed ears Droopy eyelids Inability to fully extend the joints from birth (contracture deformity) Narrow shoulders Pale skin Triple-jointed thumbs ``` ### How it works * `(('\n' + c if c.isupper() else c) for c in s)` The above generates a list of each character `c` in string `s` except if `c` is upper case in which case it prepends a new line to that character. * `''.join(('\n' + c if c.isupper() else c) for c in s))` This joins the list back together into a string. * `''.join(('\n' + c if c.isupper() else c) for c in s)[1:]` This removes the extra newline from the beginning of the string. Upvotes: 2 <issue_comment>username_3: I think the following code can be interesting ``` import re output = re.sub( r"([A-Z])", r"\n\1", inputString) print(output) ``` you can also store it back in list by splitting all the `\n` ``` outputList = output.split('\n')[1::] ``` This initially replaces all the capital letters with a `\n` and then the capital letter Upvotes: -1
2018/03/16
1,005
3,268
<issue_start>username_0: I am trying to understand processes in C. I currently want to create shell-like structure which - after pressing a shortcut like `Ctrl`+`C` or `Ctrl`+`Z` will kill all its subprocesses but will stay alive. My code looks like this: ``` #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include pid\_t pid; void send\_signal(int signum){ kill(pid, signum); } void init\_signals(){ signal(SIGINT, send\_signal); signal(SIGTSTP, send\_signal); } int main(){ init\_signals(); pid = fork(); if(pid > 0){ //Parent Process wait(NULL); } else { // Child Process while(1){ usleep(300000); } } return 0; } ``` Problem here is that, when I press Ctrl+C, parent redirects it to child and kills it but when I press Ctrl+Z (even though child process is stopped) parent still hangs on `wait(NULL)`. Any suggestions on how to fix this?<issue_comment>username_1: You can check here [how to use wait in C](https://stackoverflow.com/questions/23709888/how-to-use-wait-in-c#23710156) . Long story short: > > The wait system-call puts the process to sleep and waits for a child-process to end. It then fills in the argument with the exit code of the child-process (if the argument is not NULL). > > > `wait` doesn't get signaled until the child process ends, so just by sending the child to sleep there is no reason for the main process to continue. If you want any setup where the main process still works while the child does as well (including when it sleeps!) you can't wait on the child. Wouldn't make sense for a shell either - it's always active in the background. Instead you need a better handler on main - like waiting on a condition. That way, when sending a child to sleep, you can signal the condition and keep going. Upvotes: 2 [selected_answer]<issue_comment>username_2: Apart from the solution at <https://stackoverflow.com/a/49346549/5694959> I would like to suggest one more solution as to handle signals for parent process only.This way parent will execute signal handler and default action will be performed for child process. Use `waitpid()` to get the status of child. > > waitpid(pid, NULL, WUNTRACED); > > > Now parent will resume its execution when child process changes its state i.e. either terminated or stopped. Update your code as follows: ``` #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include pid\_t pid; void send\_signal(int signum){ kill(pid, signum); } void init\_signals(){ signal(SIGINT, send\_signal); signal(SIGTSTP, send\_signal); } int main(){ pid = fork(); if(pid > 0){ //Parent Process init\_signals(); waitpid(pid, NULL, WUNTRACED); printf("I think this is what you are expecting...\n"); } else { // Child Process while(1){ usleep(300000); } } return 0; } ``` Just one thing to keep in mind that please make sure that parent process has handled signal before you press `ctrl` + `c` or `ctrl` + `z` otherwise, default action of signal will be performed for parent as well. Upvotes: 0
2018/03/16
824
2,935
<issue_start>username_0: I noticed that the button gets the classes cdk-focused and cdk-program-focused added *after* the dialog it triggered is closed. If I click anywhere, the effect disappears. **app.component.html** [fragment] ``` delete ``` [![Illustration](https://i.stack.imgur.com/FA3Fa.gif)](https://i.stack.imgur.com/FA3Fa.gif)<issue_comment>username_1: 1. Add some id to your button in HTML. In my case it's **#changeButton**: ```html edit ``` 2. Import ViewChild and ElementRef in the .ts file: ```typescript import { ViewChild, ElementRef } from '@angular/core'; ``` 3. Declare a new variable in the .ts file: ```js @ViewChild('changeButton') private changeButton: ElementRef; ``` 4. Subscribe to your dialog's **afterClosed()** event and remove the **cdk-program-focused** css class: ```js dialogRef.afterClosed().subscribe(result => { this.changeButton['_elementRef'].nativeElement .classList.remove('cdk-program-focused'); }); ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: In my case, the real problem lies in the button structure: Material builds various sub-components, and the last one is a 'div' with the css class 'mat-button-focus-overlay'. My solution is simple: in the 'style.css' file, add or override 'mat-button-focus-overlay' ``` .mat-button-focus-overlay { background-color: transparent!important; } ``` Upvotes: 2 <issue_comment>username_3: This style fixed it for me: ``` .mat-button, .mat-flat-button, .mat-stroked-button { &.cdk-program-focused .mat-button-focus-overlay { opacity: 0 !important; } } ``` Upvotes: 1 <issue_comment>username_4: You can simply override the css in the mat-dialog consuming file ``` .mat-button-focus-overlay { background-color: transparent!important; } ``` In your case, add this css in the `mat-cell` component's css file. Upvotes: 0 <issue_comment>username_5: Use it like this ``` .mat-icon-button, a.mat-button { &.cdk-focused, &.cdk-program-focused { background-color: transparent; .mat-button-focus-overlay { display: none; } } } ``` Upvotes: 1 <issue_comment>username_6: Based off @username_1, I found that some buttons still retain some form of focus. Instead of removing the classes, I call blur on the element; this solves my problem. ```js dialogRef.afterClosed().subscribe(result => { this.changeButton['_elementRef'].nativeElement.blur(); }); ``` Upvotes: 0 <issue_comment>username_7: Here the better solution is to use the configuration property `restoreFocus: false` ``` matDialog.open(YourDialogComponent, { restoreFocus: false }); ``` In this case, when you close the matDialog using matDialogRef.close(), the focus will not be applied to that delete button/icon. Refer: <https://github.com/angular/components/issues/8420> - it has other alternative solutions including `restoreFocus: false`. Upvotes: 3
2018/03/16
2,916
10,007
<issue_start>username_0: I have a program that installs an updated database monthly into a special software we use. I get an .exe, run it, the .exe "installs" a bunch of DBF/CDX files into a folder, and then "hooks up" the database info into our software somehow. Here is what the "installation" output folder looks like every month: [![Here is what the output folder looks like every month:](https://i.stack.imgur.com/4i7kF.jpg)](https://i.stack.imgur.com/4i7kF.jpg) I've opened the DBF I'm most interested pulling info from (parts.dbf) (with at least 4 different pieces of software I believe) and browsed the data. Most of the fields look fine, readable, all is good. However, the 2 fields that I NEED (Prices and Part Numbers) are unreadable. In the Parts column all of the fields show 10 or 12 characters followed by a bunch of 9's (examples:<\MFMIFJHMFll999999999999999999, KI9e^Z]pbk^999999999999999999, JIFIPKMFL999999999999999999999). In the Price column its similar, just not as many characters (examples: LJKLGIQ999, IGII999999, JMQJGLL999). Here is a screenshot of what I'm seeing exactly: [![Here is a screenshot of what I'm seeing exactly:](https://i.stack.imgur.com/kKB6Z.jpg)](https://i.stack.imgur.com/kKB6Z.jpg) I have googled just about everything I know to google. I've downloaded different programs, tried to pull the data into Crystal Reports, tried to encode it differently (not sure I did that right, though), tried to figure out how to decrypt it (that journey was short-lived because it was so over my head), and just generally been pulling my hair out over this for weeks. I don't know what to do because I don't even really know where to begin. I'm just stabbing in the dark. I *THINK* this file was created in some version of FoxPro but I could be wrong. When I view the information in our software it all shows up fine. Part Numbers and Prices look like readable human characters. Example of data in our software: [![Example of data in our software:](https://i.stack.imgur.com/vhE2F.jpg)](https://i.stack.imgur.com/vhE2F.jpg) I'm out of ideas. I need to know what I'm working with so I can work on figuring out how to "fix it". Is this a FoxPro file? Is it encoded in a way that I need to change? Is it encrypted data in those two fields? Am I way off on everything? Ideally, I'd love to pull this data into Crystal Reports and do my reporting thing with the data. Even Excel could probably work okay. As it stands though I can't do much reporting with a bunch of weird characters and 9's. Any help with this would be greatly appreciated. Screenshot of Schema, per comment section: [![enter image description here](https://i.stack.imgur.com/XKdO8.jpg)](https://i.stack.imgur.com/XKdO8.jpg)<issue_comment>username_1: > > I THINK this file was created in some version of FoxPro > > > While the DBF Data Tables were CREATED by Foxpro, they are POPULATED by an APPLICATION which may or may not have been written in Foxpro. And yes, you do not need to worry about the CDX files unless you want to organize (sequence) the data by one of its Indexes or to establish Relationships between multiple Data Tables for processing purposes. However unless you were to do that using Foxpro/Visual Foxpro itself, it wouldn't be of use to you anyway. From the comments that you have already received, it looks as though the developers of the APPLICATION that writes the field values into the DBF Data Tables might have encrypted the data. And it also seems like you may have found how to decrypt it using the suggestions above. 
> > I'm no programmer unfortunately > > > If that is the case then I'd suggest that you **STOP RIGHT NOW** before you introduce more problems than you want. Blindly 'mucking' around with the data might just make things worse. If this project is **BUSINESS CRITICAL** then you should hire a software consultant familiar with Foxpro/Visual Foxpro to get the work done - after which you can do whatever you want. Remember that if something is **BUSINESS CRITICAL** then it is worth spending the $$$$ Good Luck Upvotes: 0 <issue_comment>username_2: Yes, 0x03 in header's first byte it is a Foxbase table. As cHao already pointed out, the author decided to create those columns with some byte shifting of each character (I wouldn't call that encryption though, too easy to solve for any programmer - or non-programmer with some pattern discovery). Now the question is how you can utilize that data without damaging the original. One idea is to take a copy, alter the data in it and use that copy instead. Doing that with some computer language is easy when you are a programmer, but you are saying you are not. Then comes the question, which language code you could simply get and compile on your computer. Well I wanted to play with this as a skill testing for myself and came up with some C# code. It was quite easy to write, and compile on any windows machine (so I thought, I had been doing that since years ago). I was mistaken, I don't know why nor have a will to investigate, but the executable created using command line compiler (any windows have it already) is blocked by my antivirus! I signed it but nothing changed. I gave up very quickly. Luckily there was another choice which I think is better anyways. Go < g > write and compile with Go - the fantastic language from Google. If you want to spare your 10-15 mins at most to it, I will give you the code and how to compile it into an exe on your computer. 
First here is the code itself: ``` package main import ( "fmt" "log" "path/filepath" "strings" "os" "io" "time" "github.com/jmoiron/sqlx" _ "github.com/mattn/go-adodb" ) func main() { if len(os.Args) != 2 { log.Fatal("You need to supply an input filename.") } source := os.Args[1] if _, err := os.Stat(source); os.IsNotExist(err) { log.Fatal(fmt.Sprintf("File [%s] doesn't exist.", source)) } log.Println(fmt.Sprintf("Converting [%s]...", source)) saveAs := GetSaveAsName(source) log.Println(fmt.Sprintf("Started conversion on copy [%s]", saveAs)) ConvertData(saveAs) log.Println("Conversion complete.") } func ConvertData(filename string) { srcBytes := make([]byte, 127-32-1) dstBytes := make([]byte, 127-32-1) for i := 32; i < 34;i++ { srcBytes[i-32]=byte(i+25) dstBytes[i-32]=byte(i) } for i := 35; i < 127; i++ { srcBytes[i-33] = byte(i+25) dstBytes[i-33] = byte(i) } src := string(srcBytes) + string(byte('"')+25) dst := string(dstBytes) dbPath, dbName := filepath.Split(filename) db, err := sqlx.Open("adodb", `Provider=VFPOLEDB;Data Source=` + dbPath) e(err) defer db.Close() stmt := fmt.Sprintf(`update ('%s') set p_part_num = chrtran(p_part_num, "%s", "%s"+'"'), p_price = chrtran(p_price, "%s", "%s"+'"')`, dbName, src, dst, src, dst) _, err = db.Exec(stmt) e(err) } func GetSaveAsName(source string) string { fp, err := filepath.Abs(source) e(err) dir, fn := filepath.Split(fp) targetFileName := filepath.Join(dir, fmt.Sprintf("%s_copy%d.dbf", strings.Replace(strings.ToLower(fn), ".dbf", "", 1), time.Now().Unix())) e(err) in, err := os.Open(source) e(err) defer in.Close() out, err := os.Create(targetFileName) e(err) defer out.Close() _, err = io.Copy(out, in) e(err) err = out.Close() e(err) return targetFileName } func e(err error) { if err != nil { log.Fatal(err) } } ``` And here are the steps to create an executable out of it (and have Go as a language on your computer for other needs:) * Download [Go language](http://golang.org) from Google and install it. Its installer is simple to use and finish in a few seconds. * Open a command prompt. Type: Go version [enter] -You should see installed Go's version (as of now 1.10). -Type ``` Go env [enter] ``` and check GOPATH , it points the base folder for your go projects. Go to that folder and create 4 folders named: bin, pkg, src and vendor By default GOPATH is "Go" under your home folder, looks like this: c:\users\myUserName\Go after creating folders you would have: ``` c:\users\myUserName\Go c:\users\myUserName\Go\bin c:\users\myUserName\Go\pkg c:\users\myUserName\Go\src c:\users\myUserName\Go\vendor ``` using any text editor (Notepad.exe for example) copy & paste and save the code as say "MyCustomConverter.go" into src folder. Code has 2 external libraries that you need to get. Change directory to your GOPATH (not really necessary but my habit at least) and get those libraries typing: ``` cd %GOPATH% go get -v github.com/jmoiron/sqlx go get -v github.com/mattn/go-adodb ``` You are ready to compile your code. ``` cd src set GOARCH=386 go build MyCustomConverter.go ``` This would create MyCustomConverter.exe that you can use for conversion. > > set GOARCH=386 is needed in this special case, because VFP OLEDB driver is 32 bits driver. > > > Oh I forgot to tell, it uses VFPOLEDB driver, which you can [download from here](https://www.microsoft.com/en-us/download/details.aspx?id=14839) and install. 
You would use the executable like this: MyCustomConverter.exe "c:\My Folder\parts.dbf" and it would create a modified version of that named as: "c:\My Folder\parts\_copyXXXXXXXXXX.dbf" where XXXXXXXXXXX would be a timestamp value (so whenever you run you create another copy, it doesn't overwrite on to one that may exist). Instead of going to command prompt everytime and typing the fullpath of your parts table, you could copy the MyCustomConverter.exe file on to desktop and drag & drop your parts.dbf on to that exe from windows explorer. (It was a nice exercise for my Go coding - there would be critics such as why I didn't use parameters but I really had good reasons, driver and the Go library support namely:) Upvotes: 1
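For readers who just want to sanity-check the decoded values before committing to the Go/VFP route above, the same character-shift idea can be sketched in a few lines of Python. This is only an illustration of the mapping used in the `chrtran()` call — each stored character appears to be the real ASCII character shifted up by 25 code points, with the runs of 9s being padding that decodes to spaces. The offset of 25 is my reading of the Go code, and the sample strings are the ones quoted in the question.

```python
def decode_shifted(value):
    """Undo the +25 character shift apparently used in the part-number and price columns."""
    out = []
    for ch in value:
        code = ord(ch)
        # encoded characters cover printable ASCII (32..126) shifted up by 25
        if 32 + 25 <= code <= 126 + 25:
            out.append(chr(code - 25))
        else:
            out.append(ch)  # leave anything unexpected untouched
    return "".join(out).rstrip()  # the trailing 9s decode to padding spaces

# sample values quoted in the question
for raw in ("LJKLGIQ999", "IGII999999", "JIFIPKMFL999999999999999999999"):
    print(raw, "->", decode_shifted(raw))
```

If the output looks like plausible prices and part numbers, the offset of 25 is confirmed; if not, only that constant needs adjusting, in this sketch and in the Go program alike.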
2018/03/16
669
2,166
<issue_start>username_0: I have made a page (on Shopify) and I made there a fixed "go to top" arrow on the left bottom. It's okay, but when I scroll to the page bottom the arrow is will be in front of the footer, and I want it to stay above the footer. Here is the code that I use: ```js $(document).ready(function() { $(window).scroll(function() { if ($(this).scrollTop() > 200) { $('.go-top').fadeIn(200); } else { $('.go-top').fadeOut(200); } }); $('.go-top').click(function(event) { event.preventDefault(); $('html, body').animate({scrollTop: 0}, 300); }) }); ``` ```css .go-top { position: fixed; bottom: 2em; right: 0.5em; text-decoration: none; font-size: 40px; } ``` ```html [↑](#) ```<issue_comment>username_1: add z-index to the css. something like: ``` z-index: 100000 ``` make the number as large as needed for it to be on top. Upvotes: 2 <issue_comment>username_2: ```js $(document).ready(function() { $(window).scroll(function() { //--------------------------- Lines added ------------------------// var footertotop = ($('.footer').position().top); var scrolltop = $(document).scrollTop() + window.innerHeight; var difference = scrolltop-footertotop; if (scrolltop > footertotop) { $('.go-top').css({'bottom' : difference}); }else{ $('.go-top').css({'bottom' : 10}); }; //--------------------------- end ---------------------------------// if ($(this).scrollTop() > 200) { $('.go-top').fadeIn(200); } else { $('.go-top').fadeOut(200); } }); $('.go-top').click(function(event) { event.preventDefault(); $('html, body').animate({scrollTop: 0}, 300); }) }); ``` ```css .container{height:2000px;position:relative} .footer{height:200px;width:100%;position:absolute;bottom:0px;background:red} .go-top { position: fixed; bottom: 20px; display:none; // <---- Dont display on page load right: 0.5em; text-decoration: none; font-size: 40px; } ``` ```html [↑](#) ``` Upvotes: 2
2018/03/16
1,311
4,075
<issue_start>username_0: I'm doing a parser for build outputs, and I'd like to highlight different patterns in different colors. So for example, I'd like to do: ``` sed -e "s|\(Error=errcode1\)|\1<\_red>|" \ -e "s|\(Error=errcode2\)|\1<\_orange>|" \ -e "s|\(Error=.\*\)|\1<\_blue>|" ``` (so it higlights errcode1 in red, errcode2 in orange, and anything else in blue). The problem with this is that `Error=errcode1` matches both the first and the third expression, which will result in `Error=errcode1<\_red><\_blue>`... Is there any way to tell sed to match only the first expression, and if it does, do not try the following expressions? Note, the sed command will actually be auto-generated from files which will be very volatile, so I'd like a generic solution where I don't have to police whether patterns conflict...<issue_comment>username_1: Let's start with a simpler example to illustrate the problem. In the code below, both substitutions are performed: ``` $ echo 'error' | sed 's/error/error2/; s/error/error3/' error32 ``` If we want to skip the second if the first succeeded, we can use the "test" command which branches if the previous substitution was successful. If we provide no label after `t`, it branches to the end, skipping all remaining commands: ``` $ echo 'error' | sed 's/error/error2/; t; s/error/error3/' error2 ``` ### Summary If you want to stop after the first substitution that succeeds, place a `t` command after each substitution command. ### More complex case Suppose that we want to skip the second but not the third substitution if the first succeeds. In that case, we need to supply a label to the `t` command: ``` $ echo 'error' | sed 's/error/error2/; ta; s/error/error3/; :a; s/error/error4/' error42 ``` In the above, `:a` defines label `a`. The command `ta` branches to label `a` if the preceeding `s` command succeeds. ### Compatibility The above code was tested in GNU sed. I am told that BSD sed does not accept `;` as a command separator after a label. Thus, on BSD/macOS, try: ``` echo 'error' | sed -e 's/error/error2/' -e ta -e 's/error/error3/' -e :a -e 's/error/error4/' ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: You can apply boolean logic to matches with `|`, `&` and `!`. Solution -------- (not sure if the syntax is compatible with your system so you may need to add more backslashes) ``` "s|\(Error=\(.*&!errcode1&!errcode2\)\)|\1<\_blue>|" ``` Other notes ----------- `sed` can use any character as a delimiter, so all of the following expressions are equivalent: ``` "s/foo/bar/" "s:foo:bar:" "s|foo|bar|" "s#foo#bar#" ``` Also, if you are using `bash` on a Unix-based system, you can use shell variables if you're running this from a script (since your patterns are enclosed with `"` and not `'`, there's a difference). ``` PREFIX="Error=" TARGET_1="errorcode1" TARGET_2="errorcode2" SUB_1="\1<\_red>" SUB\_2="\1<\_orange>" SUB\_3="\1<\_blue>" sed -e "s|\($PREFIX$TARGET\_1\)|$SUB\_1|" \ -e "s|\($PREFIX$TARGET\_2\)|$SUB\_2|" \ -e "s|\($PREFIX\(.\*&!$TARGET\_1&!$TARGET\_2\)\)|$SUB\_3|" \ ``` Upvotes: 0 <issue_comment>username_3: If the other errorcodes follow the naming scheme errcodeN, you can negate the 1,2: ``` sed -e "s|\(Error=errcode1\)|\1<\_red>|" \ -e "s|\(Error=errcode2\)|\1<\_orange>|" \ -e "s|\(Error=errcode[^12]\)|\1<\_blue>|" ``` If the codes exceed number 9: `[^12]+` Upvotes: 0 <issue_comment>username_4: This is not a good application for sed, you should use awk instead. 
You didn't provide any sample input/output to test against so this is obviously untested but you'd do something like this: ``` awk ' BEGIN { colors["errorcode1"] = "red" colors["errorcode2"] = "orange" colors["default"] = "blue" } match($0,/(.*Error=)([[:alnum:]]+)(.*)/,a) { code = a[2] color = (code in colors ? colors[code] : colors["default"]) $0 = sprintf("%s<%s>%s<_%s>%s", a[1], color, code, color, a[3]) } { print } ' ``` The above uses GNU awk for the 3rd arg to match(), it's a minor tweak for other awks. Upvotes: 0
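Since the sed command is auto-generated from volatile files anyway, it can be easier to keep the first-match-wins rule in a small script than in a growing chain of expressions. The Python sketch below only illustrates the same lookup idea as the awk answer; the error codes, colour names, and the `[red]...[/red]` markers are placeholders for whatever your real highlighting syntax is.

```python
import re
import sys

# specific codes first; anything unlisted falls back to the default colour
COLORS = {"errcode1": "red", "errcode2": "orange"}
DEFAULT_COLOR = "blue"

ERROR_RE = re.compile(r"Error=(\S+)")

def highlight(line):
    def repl(match):
        color = COLORS.get(match.group(1), DEFAULT_COLOR)
        # placeholder markup -- substitute whatever your viewer expects
        return "[{0}]{1}[/{0}]".format(color, match.group(0))
    return ERROR_RE.sub(repl, line)

for line in sys.stdin:
    sys.stdout.write(highlight(line))
```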
2018/03/16
282
825
<issue_start>username_0: Here is my query: ``` select col1, col2, (as col3) from ``` may return `NULL`, in which case I want to set `col3` to `col2`. I know I could do this with a `CASE` statement: ``` select col1, col2, CASE WHEN () is NULL col2 ELSE () END AS col3 from ``` However, that approach executes twice. What are some options if I only want to execute once?<issue_comment>username_1: You could use subquery and `COALESCE`: ``` SELECT col1, col2, COALESCE(col3, col2) AS col3 FROM (select col1, col2, () as col3 from ) AS sub; ``` Without subquery: ``` SELECT col1, col2, COALESCE(, col2) AS col3 FROM ; ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: Similar to the other answer, but eliminates the need for a subquery: ``` select col1, col2, COALESCE(, col2) AS col3 from ``` Upvotes: 0
2018/03/16
249
922
<issue_start>username_0: I'm working on a legacy webforms app. I've added a new radmenuitem to an existing radmenu (see link2 below) but I can't see it when I compile and run the page. The existing link, Link1, appears just fine. I can even change Link1 and see the changes when testing. Link2 doesn't show. I've tried forcing the page to update by making small changes to the code behind but that doesn't work. ``` ``` I'm using Visual Studio 2017. Asp.net webforms.<issue_comment>username_1: You need to add `tag` after `RadMenu`. Look into your final code. ``` ``` Upvotes: 2 <issue_comment>username_2: It turned out to be in the code. A custom security feature was hiding certain menu items using the VB.Net code below: ``` For Each mItem As RadMenuItem In Menu1.Items If mItem.Value.ToUpper() <> "HELP" Then mItem.Visible = False End If Next ``` Upvotes: 2 [selected_answer]
2018/03/16
2,000
7,476
<issue_start>username_0: I am trying to setup a SPA where I can pass data from my appsettings.json to my clientside on server render. I followed the steps to configure SSR using the new SpaServices Templates <https://learn.microsoft.com/en-us/aspnet/core/spa/angular> However I am not understanding how to accomplish the task labeled here > > You can pass this to other parts of your app in any way supported by Angular > > > I see the example that is used is base\_url, but it seems that base is injected into the page as a DOM element ``` ``` But it is not clear how to read other items in this manner. I am testing passing whether the app is Https or not, I have this section in my Startup.cs ``` options.SupplyData = (context, data) => { data["isHttpsRequest"] = context.Request.IsHttps; }; ``` and in main.server.ts ``` { provide: 'isHttps', useValue: params.data.isHttpsRequest } ``` but it is in main.ts that I get lost, I have something like this ``` export function getBaseUrl() { return document.getElementsByTagName('base')[0].href; } export function getIsHttps() { // NO idea what needs to go here return ""; } const providers = [ { provide: 'BASE_URL', useFactory: getBaseUrl, deps: [] }, { provide: 'isHttps', useFactory: getIsHttps, deps: [] } ]; ``` I am not sure how SpaServices injects the value on Prerender into the app (I looked through the code but it isn't clear to me). Is the value put on window? How do I read the value so I can inject into constructors on components?<issue_comment>username_1: So I dove into this and tried to make it work, but unfortunately it's not meant to work like you or I thought it should work. "You can pass this to other parts of your app in any way supported by Angular" means it needs to be accessible to client side code either by using the global namespace or some other means (which I can't think of). When the app is rendered through the main.server.ts, you can embed variables from the server, but that doesn't mean they are available to the client app automatically...kind of a half baked solution in my opinion, and you and I are not the only ones confused by this (see <https://github.com/aspnet/JavaScriptServices/issues/1444>). I recommend you look for other methods of including server side data, like a config service on your application bootstrap that gets the config on startup or embedding a JSON object in the Angular hosting page that is written by the server side code. Upvotes: 0 <issue_comment>username_2: Found an answer. You can use Angular's offical [TransferState](https://angular.io/api/platform-browser/TransferState). 
You'll need to modify the following files **app.server.module.ts** *(Add ServerTransferStateModule)* ```js import { NgModule } from '@angular/core'; import { ServerModule, ServerTransferStateModule } from '@angular/platform-server'; import { AppModule } from './app.module'; import { AppComponent } from './app.component'; @NgModule({ imports: [ AppModule, ServerModule, ServerTransferStateModule ], bootstrap: [AppComponent], }) export class AppServerModule { } ``` **app.module.ts** *(Add BrowserTransferStateModule)* ```js import { BrowserModule, BrowserTransferStateModule } from '@angular/platform-browser'; import { HttpClientModule } from '@angular/common/http'; import { NgModule } from '@angular/core'; import { AppComponent } from './app.component'; @NgModule({ declarations: [ AppComponent ], imports: [ BrowserModule.withServerTransition({ appId: 'universal-demo-v5' }), HttpClientModule, BrowserTransferStateModule ], providers: [], bootstrap: [AppComponent] }) export class AppModule { } ``` This adds the modules needed to use TransferState. You'll then need to provide the value sent by the dotnet Server on `main.server.ts` and add a dummy provider on `main.ts` so as to not raise a conflict. **main.server.ts** ```js import "zone.js/dist/zone-node"; import "reflect-metadata"; import { renderModule, renderModuleFactory } from "@angular/platform-server"; import { APP_BASE_HREF } from "@angular/common"; import { enableProdMode } from "@angular/core"; import { provideModuleMap } from "@nguniversal/module-map-ngfactory-loader"; import { createServerRenderer } from "aspnet-prerendering"; export { AppServerModule } from "./app/app.server.module"; enableProdMode(); export default createServerRenderer(params => { const { AppServerModule, AppServerModuleNgFactory, LAZY_MODULE_MAP } = (module as any).exports; const options = { document: params.data.originalHtml, url: params.url, extraProviders: [ provideModuleMap(LAZY_MODULE_MAP), { provide: APP_BASE_HREF, useValue: params.baseUrl }, { provide: "BASE_URL", useValue: params.origin + params.baseUrl}, //Added provider, ServerParams is the data sent on Startup.cs { provide: "SERVER_PARAMS", useValue: params.data.ServerParams } ] }; const renderPromise = AppServerModuleNgFactory ? /* AoT */ renderModuleFactory(AppServerModuleNgFactory, options) : /* dev */ renderModule(AppServerModule, options); return renderPromise.then(html => ({ html })); }); ``` **main.ts** *(Add dummy provider and bootstrap app only when DOM finished loading. 
This will give us the time to fully save object on TransferState)* ```js import { enableProdMode } from "@angular/core"; import { platformBrowserDynamic } from "@angular/platform-browser-dynamic"; import { AppModule } from "./app/app.module"; import { environment } from "./environments/environment"; export function getBaseUrl() { return document.getElementsByTagName("base")[0].href; } const providers = [ { provide: "BASE_URL", useFactory: getBaseUrl, deps: [] }, { provide: "SERVER_PARAMS", useValue: "" } ]; if (environment.production) { enableProdMode(); } document.addEventListener("DOMContentLoaded", () => { platformBrowserDynamic(providers) .bootstrapModule(AppModule) .catch(err => console.log(err)); }); ``` Finally, we save the server provided object on `app.component.ts` on the TransferState **app.component.ts** ```js import { Component, Inject } from "@angular/core"; import { TransferState, makeStateKey } from "@angular/platform-browser"; const SERVER_PARAMS = makeStateKey("serverParams"); @Component({ selector: "app-root", templateUrl: "./app.component.html", styleUrls: ["./app.component.css"] }) export class AppComponent { serverParams: any; // Provider set on main.ts and main.server.ts constructor(@Inject("SERVER_PARAMS") private stateInj: string, private state: TransferState) { this.serverParams = this.state.get(SERVER_PARAMS, null as any); if (!this.serverParams) { this.state.set(SERVER_PARAMS, stateInj as any); } } } ``` **THATS IT** Now you can use Transfer state on any component to get the data as so *home.component.ts* ```js import { Component, Inject } from "@angular/core"; import { TransferState, makeStateKey } from "@angular/platform-browser"; const SERVER_PARAMS = makeStateKey("serverParams"); @Component({ selector: "app-home", templateUrl: "./home.component.html" }) export class HomeComponent { serverParameter = "PLACEHOLDER"; constructor(public state: TransferState) {} ngOnInit() { this.serverParameter = this.state.get(SERVER_PARAMS, ""); } } ``` Upvotes: 2
2018/03/16
1,762
6,586
<issue_start>username_0: I need to accomplish something very simple: copy a complete column to the next column to the right in the same worksheet (I have around 300 of those columns in one sheet of a workbook) meaning that the macros has to copy every odd column in range to next even column so that I end up having a range full of duplicate columns. I understand that I need to use the following formula in part or in full: ``` cells(selection.row, columns.Count).end(xltoleft).offset(,1).select ``` What would be the complete macros though? Searched every available board and found only solutions with custom conditions. Mine should be really simple. Thank you for your input.<issue_comment>username_1: Try (might need some error handling). Rather than copying entire columns I am using column A to determine the last row of data in the sheet (you can change this) then I am looping the even columns setting them equal to the prior odd columns. ``` Option Explicit Sub test() Dim loopRange As Range Set loopRange = ThisWorkbook.ActiveSheet.Columns("A:AE") Dim lastRow As Long With ThisWorkbook.ActiveSheet lastRow = .Cells(.Rows.Count, "A").End(xlUp).Row End With Dim currentColumn As Long With loopRange For currentColumn = 2 To .Columns.Count Step 2 .Range(.Cells(1, currentColumn), .Cells(lastRow, currentColumn)) = .Range(.Cells(1, currentColumn - 1), .Cells(lastRow, currentColumn - 1)).Value Next currentColumn End With End Sub ``` If you know the last row: ``` Option Explicit Sub test() Dim loopRange As Range Set loopRange = ThisWorkbook.ActiveSheet.Columns("A:AE") Const lastRow As Long = 108 Dim currentColumn As Long With loopRange For currentColumn = 2 To .Columns.Count Step 2 .Range(.Cells(1, currentColumn), .Cells(lastRow, currentColumn)) = .Range(.Cells(1, currentColumn - 1), .Cells(lastRow, currentColumn - 1)).Value Next currentColumn End With End Sub ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: I'm not entirely sure I understood the issue, but please find below a suggestion. The code may be a bit messy since I used a recorded macro: ``` Sub CopyRows() Range("A1").Activate While Not IsEmpty(ActiveCell) ActiveCell.Columns("A:A").EntireColumn.Select Selection.Copy ActiveCell.Offset(0, 1).Columns("A:A").EntireColumn.Select Selection.Insert Shift:=xlToRight ActiveCell.Offset(0, 1).Range("A1").Select Wend End Sub ``` Upvotes: 0 <issue_comment>username_3: If you're hoping to essentially duplicate every column by inserting a copy of each column to the right I think you need the below code. i.e. 
this copies columns: ``` A | B | C --------- A | B | C 1 | 2 | 3 ``` to ``` A | B | C | D | E | F --------------------- A | A | B | B | C | C 1 | 1 | 2 | 2 | 3 | 3 ``` VBA --- ``` Option Explicit Sub CopyAllColsOneToRight() Dim ws As Worksheet Dim lastCol As Long Dim lastRow As Long Dim currentCopyCol As Long Application.ScreenUpdating = False 'optimise performance by not updating the screen as we move stuff Set ws = ActiveSheet lastCol = GetLastUsedColumn(ws).Column lastRow = GetLastUsedRow(ws).Row For currentCopyCol = lastCol To 1 Step -1 CopyColumnInsertRight ws, lastRow, currentCopyCol 'CopyColumn ws, lastRow, currentCopyCol, lastRow, currentCopyCol * 2 'CopyColumn ws, lastRow, currentCopyCol, lastRow, currentCopyCol * 2 - 1 Next End Sub Sub CopyColumnInsertRight(ByRef ws As Worksheet, fromLastRow, fromCol) Dim fromRange As Range Set fromRange = ws.Range(ws.Cells(1, fromCol), ws.Cells(fromLastRow, fromCol)) fromRange.Copy fromRange.Insert Shift:=XlDirection.xlToRight End Sub 'Sub CopyColumn(ByRef ws As Worksheet, fromLastRow, fromCol, toLastRow, toCol) ' Dim fromRange As Range ' Dim toRange As Range ' Set fromRange = ws.Range(ws.Cells(1, fromCol), ws.Cells(fromLastRow, fromCol)) ' Set toRange = ws.Range(ws.Cells(1, toCol), ws.Cells(toLastRow, toCol)) ' toRange.Value2 = fromRange.Value2 'End Sub Function GetLastUsedColumn(ByRef ws As Worksheet) As Range Set GetLastUsedColumn = ws.Cells.Find( _ What:="*" _ , After:=ws.Cells(1, 1) _ , LookIn:=XlFindLookIn.xlFormulas _ , LookAt:=XlLookAt.xlPart _ , SearchOrder:=XlSearchOrder.xlByColumns _ , SearchDirection:=XlSearchDirection.xlPrevious _ , MatchCase:=False _ ) End Function Function GetLastUsedRow(ByRef ws As Worksheet) As Range Set GetLastUsedRow = ws.Cells.Find( _ What:="*" _ , After:=ws.Cells(1, 1) _ , LookIn:=XlFindLookIn.xlFormulas _ , LookAt:=XlLookAt.xlPart _ , SearchOrder:=XlSearchOrder.xlByRows _ , SearchDirection:=XlSearchDirection.xlPrevious _ , MatchCase:=False _ ) End Function ``` Notes on the code: * We disable screen updating; this avoids refreshing the UI whilst the macro runs, making the process more efficient. * We get the last populated column so that instead of copying every column on the worksheet we can limit those copied to the ones which make a difference (i.e. much faster for spreadsheets using less that the full number of columns; which will be true of most) * We get the last populated row so that instead of copying entire columns we only copy populated rows. We could check for the last used cell per row, but that's likely less efficient since typically the last row will be the same for most columns in range. Also, when using the insert method this is required to ensure that `xlToRight` doesn't cause cells to be shifted into the wrong columns. * Our for loop has `Step -1` since if we went from left to right we'd overwrite columns to the right as we copied others (e.g. copying A to B overwrites what's in B, then when we copy B to C we're actually copying the copy). Instead we work backwards so that we're always copying to blank columns or to columns we've previously copied. * I've provided a commented out version which only copies values (faster than copying formats), and another version which uses `Insert` to create the new columns. 
One may perform better than the other, but I've not tested so far (NB: The copy has to copy twice as many cells as it doesn't keep the originals but creates 2 copies, whilst the insert method keeps the originals and inserts a copy to the right, but has the additional overhead of copying formatting data). Upvotes: 0
2018/03/16
840
3,302
<issue_start>username_0: I have a question about PKCE (RFC 7636). OAuth clients that use the authorization code grant have two components: (1) the portion on the resource owner's device that initiates the authorization request and (2) a redirection endpoint on a server that can accept and send HTTPS messages. The PKCE extension to OAuth has the clients do this: 1. Generate a cryptographic random string called a code\_verifier. 2. Create a SHA-256 digest of the code\_verifier and Base64-encode it. Send that along with the authorization request. 3. When the client gets the authorization code and sends it to the token endpoint for an access token, include the original code\_verifier value. Step 2 happens on the resource owner's device. Once the resource owner has approved the request, his/her browser is redirected to the client's redirection endpoint. Step 3 happens at the redirection endpoint. So the question is, how does the redirection endpoint know the code\_verifier value? It was generated on the resource owner's device.<issue_comment>username_1: > > So the question is, how does the redirection endpoint know the > code\_verifier value? It was generated on the resource owner's device. > > > Because the redirection endpoint effectively routes to an endpoint on the same device which called the authorise endpoint. It may be registered as a loopback redirection, a app-claimed redirection or a custom URL scheme but the device will route the redirect to the appropriate app or the app will be listening on the appropriate port for loopbacks. > > OAuth clients that use the authorization code grant have two > components: (1) the portion on the resource owner's device that > initiates the authorization request and (2) a redirection endpoint on > a server that can accept and send HTTPS messages. > > > Confidential clients have a redirection endpoint on a server that can accept and send HTTPS messages. Public clients do not - and native clients using PKCE are [still public clients](https://www.rfc-editor.org/rfc/rfc6749#section-2.1). Upvotes: 3 [selected_answer]<issue_comment>username_2: To build on the information provided, PKCE is designed to ensure that the redirect URI routes back to the requesting app, and not a malicious app via an Authorization Code Interception Attack. In this scenario, the legitimate app will know the verifier but the malicious app will not know the verifier. **PKCE Legitimate App Flow** A legitimate app flow looks like the following where the authorization token request is redirected back to the SystemBrowser and then back to the originating NativeApp. [![PKCE Legitimate app flow](https://i.stack.imgur.com/MkmvM.png)](https://i.stack.imgur.com/MkmvM.png) * Ref: <http://openid.net/2015/05/26/enhancing-oauth-security-for-mobile-applications-with-pkse/> **Authorization Code Interception Attack** A malicious app can be introduced to the OS. With, or without PKCE, the native app can receive the authorization code, but it will not know the verifier and thus cannot complete the token exchange. [![PKCE Malicious App Interception](https://i.stack.imgur.com/IIsIj.png)](https://i.stack.imgur.com/IIsIj.png) * Ref: <https://docs.wso2.com/display/IS520/Mitigating+Authorization+Code+Interception+Attacks> Upvotes: 1
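To make steps 1–3 of the question concrete, here is a minimal sketch (in Python, purely for illustration) of how a public client might derive the verifier/challenge pair; the helper name is made up. The point is that the authorization request only ever carries the one-way S256 hash, so only the process that generated and kept the verifier — the same app instance that receives the loopback or claimed redirect, as the answers explain — can complete the token exchange.

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    # RFC 7636: the verifier is 43-128 characters from the unreserved set;
    # 32 random bytes base64url-encoded (padding stripped) gives 43 characters
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    # S256 code_challenge = BASE64URL(SHA256(ASCII(code_verifier))), again unpadded
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# `challenge` (with code_challenge_method=S256) goes in the authorization request;
# `verifier` never leaves the device until the token request in step 3
```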
2018/03/16
1,302
3,316
<issue_start>username_0: I have a 1d vector and want to generate a matrix based on the pairwise comparison of the vector in TensorFlow. I need to compare each element in the vector to all the others (including itself), and if they are identical, the corresponding matrix value will be 1, and -1 otherwise. For example, there is a vector of `[1,2,3,4,1]`, then the desired matrix is ``` [[1,-1,-1,-1,1], [-1,1,-1,-1,-1], [-1,-1,1,-1,-1], [-1,-1,-1,1,-1], [1,-1,-1,-1,1]]. ``` The question is how to generate such a matrix in TensorFlow.<issue_comment>username_1: I don't know if TensorFlow has anything like this builtin, but there's a pretty straightforward approach in NumPy. It works by taking all products of the elements, and picking out locations where the product of two elements `x` and `y` is equal to `x ** 2.0`. Given a vector ``` v = np.array((1, 2, 3, 4, 1)).reshape(-1, 1) # shape == (5, 1) ``` you can construct the "similarity" matrix you want by doing: ``` sim = np.where(v.dot(v.T) == np.square(v), 1, -1) ``` `sim` will look like so: ``` array([[ 1, -1, -1, -1, 1], [-1, 1, -1, -1, -1], [-1, -1, 1, -1, -1], [-1, -1, -1, 1, -1], [ 1, -1, -1, -1, 1]]) ``` Upvotes: 1 <issue_comment>username_2: Idea ---- To compute a pairwise op you can do the following trick: expand the vector to two 2-dimensional vectors: `[n, 1]` and `[1, n]`, and apply the op to them. Due to broadcasting it will produce the `[n, n]` matrix filled with op results for all pairs inside the vector. In your case, the op is a comparison, but it can be any binary operation. Tensorflow ---------- For illustration, here are two one-liners. The first one produces the boolean pairwise matrix, the second one - the matrix of `-1` and `1` (what you asked). ```py import tensorflow as tf tf.InteractiveSession() v = tf.constant([1, 2, 3, 4, 1]) x = tf.equal(v[:, tf.newaxis], v[tf.newaxis, :]) print(x.eval()) x = 2 * tf.cast(x, tf.float32) - 1 print(x.eval()) ``` Result: ```py [[ True False False False True] [False True False False False] [False False True False False] [False False False True False] [ True False False False True]] [[ 1 -1 -1 -1 1] [-1 1 -1 -1 -1] [-1 -1 1 -1 -1] [-1 -1 -1 1 -1] [ 1 -1 -1 -1 1]] ``` Numpy ----- The same in numpy is even simpler using `np.where`: ```py import numpy as np v = np.array([1, 2, 3, 4, 1]) x = v[:, np.newaxis] == v[np.newaxis, :] print(x) x = np.where(x, 1, -1) print(x) ``` Output is the same: ```py [[ True False False False True] [False True False False False] [False False True False False] [False False False True False] [ True False False False True]] [[ 1 -1 -1 -1 1] [-1 1 -1 -1 -1] [-1 -1 1 -1 -1] [-1 -1 -1 1 -1] [ 1 -1 -1 -1 1]] ``` Upvotes: 2 <issue_comment>username_3: Here is a simple way to do it: ``` In [123]: x = tf.placeholder(tf.float32, shape=(1, 5)) In [124]: z = tf.equal(tf.matmul(tf.transpose(x), x), tf.square(x)) In [125]: y = 2 * tf.cast(z, tf.int32) - 1 In [126]: sess = tf.Session() In [127]: sess.run(y, feed_dict={x: np.array([1, 2, 3, 4, 1])[None, :]}) Out[127]: array([[ 1, -1, -1, -1, 1], [-1, 1, -1, -1, -1], [-1, -1, 1, -1, -1], [-1, -1, -1, 1, -1], [ 1, -1, -1, -1, 1]], dtype=int32) ``` Upvotes: 0
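For completeness, the `np.where` pattern from the answers has a direct TensorFlow counterpart as well; the sketch below combines the broadcasting trick with `tf.where` instead of casting and rescaling. It assumes the same example vector from the question and TF 1.x-style session execution, matching the rest of the thread.

```python
import tensorflow as tf

v = tf.constant([1, 2, 3, 4, 1])
eq = tf.equal(v[:, tf.newaxis], v[tf.newaxis, :])  # boolean [n, n] matrix
ones = tf.ones_like(eq, dtype=tf.int32)
sim = tf.where(eq, ones, -ones)                    # 1 where equal, -1 elsewhere

with tf.Session() as sess:
    print(sess.run(sim))
```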
2018/03/16
672
2,149
<issue_start>username_0: This is the javascript to generate a random hex color: ``` '#'+(Math.random()*0xFFFFFF<<0).toString(16); ``` could anyone talk me through it? I understand how the Math.random works (as well as the toString at the end), but I don't understand the syntax after that. Questions I have: 1. How can Math.random() multiplied by F output a number? 2. What does the <<0 mean? 3. What does the parameter of 16 on toString mean? (does it mean no more than 16 letters?) Would really appreciate any help with this. Thanks, Raph<issue_comment>username_1: ### It looks like you picked this up on codegolf. > > How can Math.random() multiplied by F output a number? > > > It is not multiplied by F. `0xFFFFFF` is converted to `16777215`, as `0xFFFFFF` is just the hexadecimal way of writing `16777215`. > > What does the <<0 mean? > > > `<<` is a bitshift operator. `<<0` shifts all bits 0 places to the left (filler: 0). Shifting by zero does not change the bits; its useful side effect here is that the bitwise operation truncates the value to an integer, discarding the decimal places. > > What does the parameter of 16 on toString mean? (does it mean no more than 16 letters?) > > > The 16 is the parameter for the numeral system. (2 is binary, 8 is octal, 10 is decimal/normal, 16 is hexadecimal, etc.). Upvotes: 1 <issue_comment>username_2: The best way to generate random HEX color: ========================================== It consists of two functions: 1. The first one picks a random hex digit. 2. The second one builds an array of six hex digits and joins them into a color. ``` // Returns one possible value of the HEX numbers function randomHex() { var hexNumbers = [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 'A', 'B', 'C', 'D', 'E', 'F' ] // picking a random item of the array return hexNumbers[Math.floor(Math.random() * hexNumbers.length)]; } // Generates a random hex color function hexGenerator() { hexValue = ['#']; for (var i = 0; i < 6; i += 1) { hexValue.push(randomHex()); } return hexValue.join(''); } // print out the random HEX color document.write(hexGenerator()); ``` Good luck. Upvotes: 0
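If it helps to see the same trick without the JavaScript-specific bits, here is a small Python rendering of it, purely illustrative. One minor caveat about the original one-liner: when the random number happens to be small, `toString(16)` returns fewer than six digits, which is why the version below zero-pads the result.

```python
import random

def random_hex_color():
    # 0xFFFFFF is just the integer 16777215, so this picks a whole number in that range
    # (roughly what Math.random()*0xFFFFFF<<0 does, since the shift by 0 truncates the decimals)
    n = random.randint(0, 0xFFFFFF)
    # base-16 formatting is what the 16 passed to toString() selects; :06x pads to 6 digits
    return "#{:06x}".format(n)

print(random_hex_color())
```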
2018/03/16
792
3,334
<issue_start>username_0: I have a view that shows the details of a specific conference (name, description, price, etc). To show each info I use: `{{$conf->name}}, {{$conf->descrption}}, etc.` But in the case of price, the conference don't have a price. Each conference can have many registration types. Each registration type have a specific price. The price is a column of the RegistrationType table, not Conference table. In the conference details view I want to show the price range, for example, if the conference has 3 registration types and the minimum price of one of the registration types is "0" and the maximum is "10" I want to show in the view the range of the registration types of that conference, that is "0-10". But I'm not understanding what is the process to be possible to show in the view the range price of the registration types that exist for a specific conference, do you know? The Conference model and RegistrationType model are like: **Conference model:** ``` class Event extends Model { // A conference has many registration types public function registrationTypes(){ return $this->hasMany('App\RegistrationType', 'conference_id'); } } ``` **RegistrationType Model:** ``` class RegistrationType extends Model { public function conference(){ return $this->belongsTo('App\Conference'); } } ```<issue_comment>username_1: The sample code would be something like: ``` class Event extends Model { public function registrationTypes(){ return $this->hasMany('App\RegistrationType', 'conference_id'); } public function priceRangeAttribute() { $registrationTypes = $this->registrationTypes(); return $this->orderPriceRanges($registrationTypes); } private function orderPriceRanges($registrationTypes) { // process the collection and format the different prices (0, 2, 10) // into a price range like 0-10 } } ``` You would then access it like $conf->price\_range. Upvotes: 0 <issue_comment>username_2: I would do it this way (assuming that Event is in fact your Conference class) : ``` class Conference extends Model { protected $appends = ['price_range']; public function registrationTypes(){ return $this->hasMany('App\RegistrationType', 'conference_id'); } public function getPriceRangeAttribute() { return $this->registrationTypes()->get()->min('price') . ' - ' . $this->registrationTypes()->get()->max('price'); /* The first answer : * return $this->registrationTypes()->min('price') . ' - ' . $this->registrationTypes()->max('price'); * should also do it, and could even be better, because no * other results than the actual min and max are returned, * no collection fetched ! */ } } ``` Here you tell Laravel to [appends](https://laravel.com/docs/5.6/eloquent-serialization#appending-values-to-json) the **price\_range** attribute to every instance of **Conference**. The **price\_range** attribute is returned by the **getPriceRangeAttribute** [accessor](https://laravel.com/docs/5.6/eloquent-mutators#accessors-and-mutators) who check for the minimum and maximum found in the price column for every registrationType linked (returned by the hasMany relation) to your Conference. Upvotes: 2 [selected_answer]
2018/03/16
1,012
3,735
<issue_start>username_0: I am new to using templates with structures. My main goal is to have an auto variable inside a structure. More importantly, I am using a library function of Funct.Methd(const auto, &Handler). Here as you can see, in the place of the first argument, I would like to pass (based on some criteria) any const auto variable that is created by the user. I am expecting to have more than 20 variables, based on some criteria any 5 would be used. After doing some research, I found that I can use a template for this purpose. Please find the sample code below: ``` #include #include template <typename T> struct FTmr { T query; }; int main() { FTmr<T> df; } ``` I am getting an error `use of undeclared identifier 'T'` when using the template with structures. First I need to resolve the above error and then I wanted to use const auto. I was hoping to instantiate FTmr with the name df and then access the query variable. ``` df.query=<> ``` I would appreciate any help.<issue_comment>username_1: All variables in C++ have fixed types from the moment they are declared. Templates are not types. Templates are instructions on how to make a type. A `const auto` variable has a type determined by what you assign to it: ``` const auto x = foo(); ``` the type of `x` is fixed; it is deduced from the fixed return type of `foo()`. The language just finds it for you. A member of a struct is a variable. It must have its type determined within the struct the moment it is declared. You cannot defer its type until it is used. ``` struct Foo { static const auto x = 3; }; ``` that works because `x`'s type can be determined at the spot where `x` is declared. In short, `const auto` doesn't work like that. It may be possible to do what you really want to do, but it might be *hard*, and you probably lack the vocabulary to describe what it is you really want. My advice would be to not fight against the language and go with a fixed type. Upvotes: 3 [selected_answer]<issue_comment>username_2: What is `T` in main? You have to define it before using it. The previous `T` is scoped to the `templated struct` above. Also, you cannot use it that way; you have to instantiate the templated struct with a concrete type, for example: ``` FTmr<int> df; FTmr<std::string> df2; // ... ``` T is the template parameter: just as you pass a variable to a function, here you pass a `type` instead. So the compiler will create an instance of the struct with the type passed in. * Functions take variables as parameters this way: ``` void foo(int x, char y, double z, struct foo& theFoo, long* pBar, ...); ``` * Templates take `types` instead as parameters: ``` template< class T> void Power(T& x); template < typename U, typename V> class Baz{ U _uval; V _vVal; }; ``` So when instantiating the templated class / function the compiler will create an instance with that type: ``` Baz<int, std::string> bistr; ``` An instance created by the compiler: ``` class Baz{ int _uval; std::string _vVal; }; ``` * Keep in mind that at compile time there is no template or type T but instead a specific version of the class is created. Upvotes: 2 <issue_comment>username_3: **tl;dr: Templates don't read your mind. You still have to tell them what to do.** `const auto` is not a "variable type". `auto` is a keyword that enables the *deduction* of a type from some information that you give it. For example, it knows that `5` is an `int` (because them's the rules), so `auto x = 5;` is possible. In this case, you simply did not give it any information. How do you think it should know what to use for `T`? *We* don't even know! Do you know?
What should `query` be? What should it do? Think it through, then let your computer know what you've decided. Upvotes: 0
2018/03/16
947
3,449
<issue_start>username_0: I have code something like this. ``` new Vue({ el:'#app', data:{ cars:[ { name:'', make:'' } ] }, methods:{ addToArray(nameOfTheArray){ //here nameOfTheArray is the string "cars" //so here I have to do something like this this.cars.push({ name:'', make:'' }) } } }) ``` My question is can I use that argument(nameOfTheArray) to tell in which array I want to push this object. I mean something like this? ``` this.nameOfTheArray.push({ name:'', make:'' }) ``` but this doesn't work. is there any way to use that string argument with this keyword??
2018/03/16
1,018
3,552
<issue_start>username_0: I have a web application where I am querying an InterSystems Cachè database. The query is: ``` SELECT TOP 10 "x_med_orders"."bnf_chapter","x_active_inpatients"."ward","x_active_inpatients"."lnkpid", "x_med_orders"."drug_description", "x_med_orders"."start_date", "x_med_orders"."discontinue_date", "x_med_orders"."stop_date" FROM ( "XXX_Super"."x_active_inpatients" "x_active_inpatients" INNER JOIN "XXX_Super"."x_med_orders" "x_med_orders" ON "x_active_inpatients"."lnkpid"="x_med_orders"."lnkpid")WHERE = "x_med_orders"."bnf_chapter" = 'xxx' ``` When I remove the where clause, the query runs perfectly fine. If I however include it I get the error below. This is my first time dealing with this database type. [error image](https://i.stack.imgur.com/tXVgY.png)
2018/03/16
879
3,237
<issue_start>username_0: I want to show the fixed row as a last line in the scollable table. The row can be outside the table but fixed to the table and it should not scroll when user scoll's the table. I tried using below css code but couldn't fix the issue. The row highlighted in red color i want to make it fixed and when user scrolls it shoudn't scroll and should be seen. css code: ``` #foo { position: absolute; bottom: 0; background: red; } ```
2018/03/16
1,494
4,348
<issue_start>username_0: An example 404 URL: `http://ip.address/static/admin/css/base.css` I'm not sure what I did wrong. Here's the associated files: settings.py ``` STATICFILES_DIRS = [ '/home/username/sitename/sitenameenv/lib/python3.5/site-packages/django/contrib/admin/static', ] STATIC_URL = '/static/' STATIC_ROOT = '/home/username/sitename/website/staticroot/' ``` Nginx config: ``` server { listen 80; server_name http://ip.address/; location = /favicon.ico { access_log off; log_not_found off; } location /static/ { root /home/username/sitename/website/; } location / { include proxy_params; proxy_pass http://unix:/home/username/sitename/sitename.sock; } } ``` The regular static files are being served up correctly, and they were doing so also before I added `STATIC_ROOT` and used `collectstatic`, which I didn't think was necessary. My admin is getting 404s for its static files though.<issue_comment>username_1: **UPDATE:** Running `collectstatic` will collect all your files in the directory specified by `STATIC_ROOT` settings. So, you'll need to configure your Nginx server to serve your `STATIC_ROOT` folder at the `/satic/` url: ``` location /static/ { # using `alias` instead of `root` # vvv alias /home/username/sitename/website/staticroot/; # ^^^ # the full path to the STATIC_ROOT directory } ``` --- **Old Version (not very useful):** **Why is this happening?** Here's one important thing to note: Django doesn't serve static files. I'm sure you know that. So, when your browser requests for an admin css file from - `http://ip.address/static/admin/css/base.css`, this request is intercepted by your Nginx server because you've mapped `/static/` path to this directory - `/home/username/sitename/website/static/`. Now, what happens next is, Nginx will try to find `/admin/css/base.css` file inside `/home/username/sitename/website/static/`. But there isn't any `admin` directory in there, hence this request ends up as a `404`. --- **How to fix this?** You can map `/static/admin/` url to that directory where the `admin` directory actually is. Example: ``` # put this before `location /static/` conf location /static/admin/ { root /home/username/sitename/sitenameenv/lib/python3.5/site-packages/django/contrib/admin/; } ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: I have the same problem. My nginx server on Centos 7.6 can't access to static folder in path `/home/user/app/mysyte/static/`. In `/var/log/nginx/error.log` same error ``` open() "/home/user/app/mysyte/static/*.css" failed (13: Permission denied) ``` For solving and understanding this problem `:=*` 1. run command `getenforce` 2. if enforcing - `cat /var/log/audit/audit.log | grep nginx` for me string with errrors looks like ``` type=AVC msg=audit(1558033633.723:201): avc: denied { read } for pid=7758 comm="nginx" name="responsive.css" dev="dm-0" ino=17312394 scontext=system_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file permissive=0 type=SYSCALL msg=audit(1558033633.723:201): arch=c000003e syscall=2 success=no exit=-13 a0=564f710dd55d a1=800 a2=0 a3=68632f656d6f682f items=0 ppid=7757 pid=7758 auid=4294967295 uid=998 gid=996 euid=998 suid=998 fsuid=998 egid=996 sgid=996 fsgid=996 tty=(none) ses=4294967295 comm="nginx" exe="/usr/sbin/nginx" subj=system_u:system_r:httpd_t:s0 key=(null) ``` copy id of audit msg `1558033633.723:201` 3. 
run command `grep yours_audit_id /var/log/audit/audit.log | audit2why` output for me ``` [root@uwsgi ~]# grep 1558034479.384:221 /var/log/audit/audit.log | audit2why type=AVC msg=audit(1558034479.384:221): avc: denied { read } for pid=7758 comm="nginx" name="responsive.css" dev="dm-0" ino=17312394 scontext=system_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file permissive=0 Was caused by: The boolean httpd_read_user_content was set incorrectly. Description: Allow httpd to read user content Allow access by executing: # setsebool -P httpd_read_user_content 1 ``` So as you can see answer here `setsebool -P httpd_read_user_content 1` when you run this command you see your static content Upvotes: 1
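For reference, here is a minimal `settings.py` arrangement that pairs with the `alias`-based Nginx block from the accepted answer (the paths simply follow the question's layout and are examples, not requirements). The detail worth calling out is that `collectstatic` already gathers the admin's files from `django.contrib.admin`, so the site-packages path from the question does not need to be listed in `STATICFILES_DIRS` — that setting is only for your project's own static sources.

```python
# settings.py -- a sketch; paths follow the question's example layout
STATIC_URL = "/static/"

# `python manage.py collectstatic` copies everything here, including the
# admin's css/js, so Nginx only needs to serve this one directory
STATIC_ROOT = "/home/username/sitename/website/staticroot/"

# list only your project's own static sources here (may be empty); installed
# apps such as django.contrib.admin are picked up automatically by collectstatic
STATICFILES_DIRS = [
    # "/home/username/sitename/website/assets/",  # hypothetical example
]
```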
2018/03/16
324
1,157
<issue_start>username_0: In this example ``` std::shared_ptr<obj> ptr = new obj("old"); ptr.reset(new obj("new")); ``` the constructor of the `obj("new")` will be called first, then the destructor of the `obj("old")` will be called. Is there a way to destruct `obj("old")` first then construct `obj("new")` later? (other than call `ptr.reset()` first then call `ptr.reset(new obj("new"))`)<issue_comment>username_1: Sure. ``` ptr.reset(); ptr = std::make_shared<obj>("new"); ``` this doesn't match your "other than" clause (at least not exactly), and it destroys the old object first. I can produce pages of variations. There is no single-function API in `shared_ptr` that first destroys the contents of a shared pointer, then executes some code to construct a replacement. Upvotes: 4 [selected_answer]<issue_comment>username_2: No way to do it in a single call like `ptr.reset(new obj("new"))`, because the parameter passed to the `reset` function will be evaluated before `reset` is entered. Whatever `reset` does, `new obj("new")` will be evaluated first. So you have to first call `reset()` and then pass a newly created object in a second step. Upvotes: 3
2018/03/16
589
1,831
<issue_start>username_0: I want to save my python script's result into a txt file. My python code ``` from selenium import webdriver bro = r"D:\Developer\Software\Python\chromedriver.exe" driver=webdriver.Chrome(bro) duo=driver.get('http://www.lolduo.com') body=driver.find_elements_by_tag_name('tr') for post in body: print(post.text) driver.close() ``` --- Some code that I've tried ``` import subprocess with open("output.txt", "w") as output: subprocess.call(["python", "./file.py"], stdout=output); ``` I tried this code and it only makes an output.txt file with nothing inside it ``` D:\PythonFiles> file.py > result.txt ``` Exception: > > UnicodeEncodeError: 'charmap' codec can't encode character '\u02c9' in > position 0: character maps to <undefined> > > > and only prints out 1/3 of the results of the script into a text file.<issue_comment>username_1: You can try below code to write data to text file: ``` from selenium import webdriver bro = r"D:\Developer\Software\Python\chromedriver.exe" driver = webdriver.Chrome(bro) driver.get('http://www.lolduo.com') body = driver.find_elements_by_tag_name('tr') with open("output.txt", "w", encoding="utf8") as output: output.write("\n".join([post.text for post in body])) driver.close() ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: You can try this. This is my Python code: ``` from selenium import webdriver from selenium.webdriver.common.keys import Keys from time import sleep import time bro = r"D:\Developer\Software\Python\chromedriver.exe" driver = webdriver.Chrome(bro) driver.get('http://www.lolduo.com') body = driver.find_elements_by_tag_name('tr') with open('output15.txt', mode='w', encoding='utf-8') as f: for post in body: print(post.text) f.write(post.text + '\n') time.sleep(2) driver.close() ``` Upvotes: -1
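A side note on the `file.py > result.txt` attempt from the question: the `UnicodeEncodeError` comes from Windows using a legacy 'charmap' codec for redirected output, not from Selenium. Writing the file yourself with `encoding="utf8"`, as in the accepted answer, sidesteps it; another option on Python 3.7+ is to reconfigure stdout at the top of the script, sketched below.

```python
import sys

# Python 3.7+: make print() and stdout redirection use UTF-8 on Windows,
# avoiding the 'charmap' UnicodeEncodeError from the question
if hasattr(sys.stdout, "reconfigure"):
    sys.stdout.reconfigure(encoding="utf-8")
```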
2018/03/16
837
2,984
<issue_start>username_0: Lets say I have a function that query some table `company.workers`, for example (pseudo code): ``` def sql_query(dept, prof, hireDate): q = """SELECT * FROM company.workers WHERE department = {0} AND profession = {1} AND hire_date > {2}""".format(dept, prof, hireDate) cur.execute(q) return cur ``` What if I want to allow the user the query only on `dept` and `prof`, with `hireDate` being optional? This is the solution I came up with: ``` def sql_query(dept, prof, *args): if args: q = """SELECT * FROM company.workers WHERE department = {0} AND profession = {1} AND hire_date > {2}""".format(dept, prof, args[0]) else: q = """SELECT * FROM company.workers WHERE department = {0} AND profession = {1} """.format(dept, prof) cur.execute(q) return cur #the function could be called as so: sql_query('20', 'Engineer', (2017-12-10)) ``` However I think this is subotimal. What if I want to allow several *optional* coloumns to query? If I make it two, I have 4 options to handle, which is a lot of `else-if` blocks to make. Is there a more efficient/elegant solution?<issue_comment>username_1: You don't say what SQL DBMS you're using, but here's a block of SQL Server-style code that accepts three variables: ``` @dept @prof @hireDate ``` The WHERE clause always uses @dept and @prof, and only uses @hireDate if it is not null. ``` SELECT * FROM company.workers WHERE department = @dept AND profession = @prof AND ( ( @hireDate IS NOT NULL AND hire_date = @hireDate ) OR @hireDate IS NULL ) ``` You could then add as many other option variables as needed using this same style, instead of writing separate SQL statements for each combination. Upvotes: 2 [selected_answer]<issue_comment>username_2: Similar question [here](https://stackoverflow.com/questions/15496051/dynamic-sql-where-clause-generation). You're right, you don't really want to be manually generating the statement. You could improve your current code by making it more dynamic using a dictionary: ``` def sql_query(**params): q = "SELECT * FROM company.workers" count=0 for i in non_require_param: if count==0: q += " WHERE {0} = {1} ".format(i, params[i]) else: q += " AND {0} = {1} ".format(i, params[i]) count += 1 cur.execute(q) return cur ``` Also, [sqlite's execute cursor](https://docs.python.org/2/library/sqlite3.html#sqlite3.Cursor.execute) is something to look into. It is cleaner than formatting the statement yourself and handles the datatype conversion: ``` who = "Yeltsin" age = 72 cur.execute("""select * from company.workers where name_last=:name_last and age=:age""", {"name_last": who, "age": age}) ``` Upvotes: 0
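Building on the answers above, here is a minimal sketch of the same idea using driver placeholders instead of `str.format` (the `%s` placeholder style is an assumption — it matches psycopg2/MySQLdb, while sqlite3 uses `?` — and `cur` is the cursor the question already has):

```python
def build_query(dept, prof, hire_date=None):
    """Build a parameterized query; the optional filter is added only when supplied."""
    sql = "SELECT * FROM company.workers WHERE department = %s AND profession = %s"
    params = [dept, prof]
    if hire_date is not None:
        sql += " AND hire_date > %s"
        params.append(hire_date)
    return sql, params

# usage sketch, with the optional filter:
#   cur.execute(*build_query('20', 'Engineer', '2017-12-10'))
# and without it:
#   cur.execute(*build_query('20', 'Engineer'))
```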
2018/03/16
871
3,244
<issue_start>username_0: I started learning Swift few weeks ago and in one lesson (Arrays and for .. in loops) I had to make func that counts votes and gives an answer. So I made this code thinking thats it but this error pops in -> "Type 'Bool' does not conform to protocol 'Sequence'" here's the code: ``` func printResults(forIssue: String, withVotes: Bool) -> String { positive = 0 negative = 0 for votes in withVotes { if votes == true { positive += 1 } else { negative += 1 } } return "\(forIssue) \(positive) yes, \(negative) no" } ``` The error pops in 4th line with 'withVotes' There are already some arrays that got Bool type values.<issue_comment>username_1: The compiler is right. You're trying to iterate through a bool value `withVotes` which will not work. The solution is to create an array of bool values. Like following ``` for i in [true, false, true] { if i == true { print("true") } } ``` Change your parameter `withVotes` from `Bool` to `[Bool]` and the compiler will be happy :) At the end and probably will look like that ``` func printResults(forIssue: String, withVotes: [Bool]) -> String { positive = 0 negative = 0 for votes in withVotes { if votes == true { positive += 1 } else { negative += 1 } } return "\(forIssue) \(positive) yes, \(negative) no" } ``` Upvotes: 0 <issue_comment>username_2: You need to pass in an array like this: ``` func printResults(forIssue: String, withVotes: [Bool]) -> String { positive = 0 negative = 0 for votes in withVotes { if votes == true { positive += 1 } else { negative += 1 } } return "\(forIssue) \(positive) yes, \(negative) no" } ``` Upvotes: 0 <issue_comment>username_3: Welcome to learning Swift! You've stumbled across something where the compiler is right, but as a beginner, it's not always evident on what's going on. In this case, although it's pointing to line 4 as the problem, that's not where you need to fix it. You need to go to the *source* of the problem, which in this case is line 1, here... ``` func printResults(forIssue: String, withVotes: Bool) -> String { ``` Specifically `withVotes: Bool`. The problem is because of the way you have it written, it's only allowing you to pass in a single boolean. By your question and the rest of your code, you clearly want to pass in several. To do that, simply make it a bool array, like this... `withVotes: [Bool]` (Note the square brackets.) Here's your updated code with the change on line 1, not line 4. Note I also updated the signature and variable names to be more 'swifty' if you will where the focus should always be on clarity: ``` func getFormattedResults(for issue: String, withVotes allVotes: [Bool]) -> String { var yesVotes = 0 var noVotes = 0 for vote in allVotes { if vote { yesVotes += 1 } else { noVotes += 1 } } return "\(issue) \(yesVotes) yes, \(noVotes) no" } ``` Hope this explains it a little more, and again, welcome to the Swift family! :) Upvotes: 3 [selected_answer]
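As a small follow-up sketch to the accepted answer (this variant is not part of the original lesson), the same tally can be written without an explicit loop:

```swift
func getFormattedResults(for issue: String, withVotes allVotes: [Bool]) -> String {
    let yesVotes = allVotes.filter { $0 }.count   // count only the true votes
    let noVotes = allVotes.count - yesVotes
    return "\(issue) \(yesVotes) yes, \(noVotes) no"
}

print(getFormattedResults(for: "Pizza party:", withVotes: [true, false, true]))
// "Pizza party: 2 yes, 1 no"
```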
2018/03/16
588
1,621
<issue_start>username_0: Suppose I have the following DataFrame: ``` >>> df val1 val2 val3 key 1 1 1 1 2 2 2 2 3 3 3 3 ``` Now I want to select columns `val1`, `val2`, and (here's the kicker:) `val4` ``` >>> df[["val1", "val2", "val4"]] KeyError: "['val4'] not in index" ``` What I would like: ``` >>> df.something(something) val1 val2 val4 key 1 1 1 NaN 2 2 2 NaN 3 3 3 NaN ```<issue_comment>username_1: IIUC `reindex` ``` df.reindex(columns=["val1", "val2", "val4"]) Out[431]: val1 val2 val4 key 1 1 1 NaN 2 2 2 NaN 3 3 3 NaN ``` Also `.loc` can do it , but will raise a warning : Passing list-likes to .loc or [] with any missing label will raise KeyError in the future, you can use .reindex() as an alternative. ``` df.loc[:,["val1", "val2", "val4"]] ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: Something like this should get you started: ``` import pandas as pd import numpy as np df = pd.DataFrame([[1, 1, 1], [2, 2, 2], [3, 3, 3]], columns=['val1', 'val2', 'val3']) def check_columns(df, values): temp = pd.DataFrame() for i in values: try: temp[i] = df[i] except: temp[i] = np.nan return temp print(check_columns(df, ['val1', 'val2', 'val3'])) print(check_columns(df, ['val1', 'val2', 'val4'])) ``` Gives: ``` val1 val2 val3 0 1 1 1 1 2 2 2 2 3 3 3 val1 val2 val4 0 1 1 NaN 1 2 2 NaN 2 3 3 NaN ``` Upvotes: 0
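A small addition to the accepted `reindex` answer: if `NaN` is not the placeholder you want for the missing column, `reindex` also accepts a `fill_value` (the `0` below is just an illustrative choice):

```python
import pandas as pd

df = pd.DataFrame({"val1": [1, 2, 3], "val2": [1, 2, 3], "val3": [1, 2, 3]})

out = df.reindex(columns=["val1", "val2", "val4"], fill_value=0)
print(out)
#    val1  val2  val4
# 0     1     1     0
# 1     2     2     0
# 2     3     3     0
```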
2018/03/16
1,497
5,414
<issue_start>username_0: I am trying to make a clicker game in HTML and JavaScript, and when I click the button to give me money, nothing happens. I tried printing it to the console, and it returned as NaN once, and once I clicked the button again, nothing happened. Here is the code: ``` [Click for Money](#) Money: 0 -------- document.getElementById("clicker").onclick= update(2); //document.cookie = "money="+money, time() + (10 \* 365 \* 24 \* 60 \* 60); var money = parseFloat(0); function update (amount) { console.log(parseFloat(amount)) money += parseFloat(amount); document.getElementById("para").innerHTML="Money: " + parseFloat(amount); return false; } ```<issue_comment>username_1: You're assigning the call to the update function as the onclick event handler. If you want to send an argument, you should bind it in the event handler like so: ``` document.getElementById("clicker").onclick = update.bind(this, 2); ``` Upvotes: 0 <issue_comment>username_2: This line should be as follows: ``` document.getElementById("clicker").onclick= update ``` ...not... ``` document.getElementById("clicker").onclick= update(2) ``` As you've written it, update is only called once, with a value of 2. You're getting NaN because update is called before you've zeroed `money`. By the way, this is sufficient: ``` var money = 0; ``` There's nothing to be gained by adding `parseFloat` in there. Upvotes: 1 <issue_comment>username_3: I don't see any NaN(when I run your code) but found one mistake and that is `document.getElementById("clicker").onclick= update(2);` here you are not assigning function to onclick but you calling function and return value of function(which is `undefined` for above case as we are not returning anything) is assigned to onclick(which is undefined). o assign function do following `document.getElementById("clicker").onclick= function(){update(2);}` ```html [Click for Money](#) Money: 0 -------- document.getElementById("clicker").onclick= function(){ update(2) }; //document.cookie = "money="+money, time() + (10 \* 365 \* 24 \* 60 \* 60); var money = parseFloat(0); function update (amount) { console.log(parseFloat(amount)) money += parseFloat(amount); console.log(money); document.getElementById("para").innerHTML="Money: " + parseFloat(amount); return false; } ``` Upvotes: 3 [selected_answer]<issue_comment>username_4: Rather than grabbing the element from the DOM and assigning the event, it might be advisable to explicitly wire the function on the onclick event in the HTML itself. Anyway, what you were doing was assigning a function call to an event. Anytime you have parentheses "()" after a function name, it gets invoked. Rather you have to assign the body of a function, which gets executed. For instance, your code ought to be -> ```js document.getElementById("clicker").onclick= function () {update(2)}; ``` Here is a code that might be working as you would have expected ```html [Click for Money](#) Money: 0 -------- var money = parseFloat(0); function update (amount) { console.log(parseFloat(amount)) money += parseFloat(amount); document.getElementById("para").innerHTML="Money: " + parseFloat(amount); return false; } ``` Upvotes: 0 <issue_comment>username_5: A few things wrong here. 1. As others pointed out you need to use an anonymous function on the onclick and pass the parameter to update in there 2. You should wrap only the number in a span to make the code parse the number easier in the update function. 3. 
You were never incrementing the previous value, only incrementing from 0 each time update is called. you need to get the previous value, parse it, and then add to it. ```js document.getElementById("clicker").onclick = function() { update(2) }; //document.cookie = "money="+money, time() + (10 * 365 * 24 * 60 * 60); var para = document.getElementById("para"); var money = document.getElementById("money"); function update(newAmount) { var oldMoney = parseFloat(money.textContent); var newMoney = oldMoney + parseFloat(newAmount); money.innerHTML = parseFloat(newMoney); } ``` ```html [Click for Money](#) Money: 0 -------- ``` Upvotes: 0 <issue_comment>username_6: Since you are incrementing the `money` variable it would seem that you also want that value to display rather than the `amount` being passed in: ``` document.getElementById("para").innerHTML="Money: " + parseFloat(money); ``` Not... ``` document.getElementById("para").innerHTML="Money: " + parseFloat(amount); ``` Upvotes: 0 <issue_comment>username_7: ``` [Click for Money](#) Money: 0 -------- //document.cookie = "money="+money, time() + (10 \* 365 \* 24 \* 60 \* 60); var money = parseFloat(0); function update (amount) { console.log(parseFloat(amount)) money += parseFloat(amount); console.log(money); document.getElementById("para").innerHTML="Money: " + money; return false; } ``` Upvotes: 0 <issue_comment>username_8: ```js document.getElementById("clicker").onclick= function(){ update(2) }; //document.cookie = "money="+money, time() + (10 * 365 * 24 * 60 * 60); var money = parseFloat(0); function update (amount){ money += amount; document.getElementById("para").innerHTML="Money: " + money; return false; } ``` ```html [Click for Money](#) Money: 0 -------- ``` Upvotes: 0
2018/03/16
2,737
8,802
<issue_start>username_0: I have this response with [httpie](https://github.com/jakubroztocil/httpie) when an user is logged: ``` chat-api$ http :3000/signup username=tomatito password=123 HTTP/1.1 201 Created Cache-Control: max-age=0, private, must-revalidate Content-Type: application/json; charset=utf-8 ETag: W/"01dfe24bd7415e252b5aee50e12198a3" Transfer-Encoding: chunked Vary: Origin X-Request-Id: a095148b-592a-4347-820f-63e1efa0e409 X-Runtime: 0.347726 { "auth_token": "<KEY>", "message": "Account created successfully" } ``` The object is persisted in my database. However when i make this request with axios from my vue.js form I get nothing in **localStorage** this is my **axios.js** code: ``` import axios from 'axios' const API_URL = process.env.API_URL || 'http://localhost:3000/' export default axios.create({ baseURL: API_URL, headers: { 'Content-Type': 'application/json', 'Authorization': 'Bearer ' + localStorage.auth_token } }) ``` the object is persisted in database right but i get `Authorization:Bearer undefined` these are my headers: Response: ``` Access-Control-Allow-Methods:GET, POST, PUT, PATCH, DELETE, OPTIONS, HEAD Access-Control-Allow-Origin:http://localhost:8081 Access-Control-Expose-Headers: Access-Control-Max-Age:1728000 Cache-Control:max-age=0, private, must-revalidate Content-Type:application/json; charset=utf-8 ETag:W/"fdac439f3ada9e343d0815bb49dff277" Transfer-Encoding:chunked Vary:Origin X-Request-Id:9e318050-ceca-480c-a847-d59f9ebb18b7 X-Runtime:0.447976 ``` Request: ``` Accept:application/json, text/plain, */* Accept-Encoding:gzip, deflate, br Accept-Language:en-US,en;q=0.9 Authorization:Bearer undefined Connection:keep-alive Content-Length:44 Content-Type:application/json Host:localhost:3000 Origin:http://localhost:8081 Referer:http://localhost:8081/ User-Agent:Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.75 Safari/537. ``` Request payload ``` {username: "tomatito", password: "<PASSWORD>"} password:"<PASSWORD>"username:"tomatito" ``` This is my vue script component: ``` export default { name: 'SignUp', data () { return { username: '', password: '', error: false } }, methods: { signup () { this.$http.post('/signup', { username: this.username, password: this.password }) .then(request => this.signupSuccessful(request)) .catch(() => this.signupFailed()) }, signupSuccessful (req) { if (!req.data.token) { this.signupFailed() return } localStorage.token = req.data.token this.error = false this.$router.replace(this.$route.query.redirect || '/rooms') }, signupFailed () { this.error = 'Sign up failed!' delete localStorage.token } } } ``` I'm getting **Sign up failed**, However the object is persisted in database. My back-end is ruby on rails. How can i receive in my `data.token` in payload? This is my **main.js** file ``` import Vue from 'vue' import App from './App' import router from './router' import axios from './backend/vue-axios' Vue.config.productionTip = false /* eslint-disable no-new */ new Vue({ el: '#app', router, components: { App }, axios, template: '' }) ``` This is my **vue-axios/index.js** file: ``` import Vue from 'vue' import VueAxios from 'vue-axios' import axios from './axios' Vue.use(VueAxios, axios) ``` **Updated** The problem was in **req**. 
it's **res** to receive the token instead of **req** ``` signup() { this.$http .post("/signup", { username: this.username, password: <PASSWORD> }) .then(res => this.signupSuccessful(res)) .catch(() => this.signupFailed()); }, signupSuccessful(res) { if (!res.data.auth_token) { this.signupFailed(); return; } this.error = false; localStorage.token = res.data.auth_token; this.$store.dispatch("login"); this.$router.replace(this.$route.query.redirect || "/rooms"); }, . . . . ``` Thank you
2018/03/16
813
2,630
<issue_start>username_0: I have a Raspberry Pi connected to a VPN via openvpn. Periodically, the connection drops, so I use the following script: ```sh #!/bin/bash ps -ef | grep -v grep | grep openvpn if [ $? -eq 1 ] ; then /sbin/shutdown -r now fi ``` I added it to crontab (using `sudo crontab -e`), I want the script to be executed every 5 minutes: ``` */5 * * * * /etc/openvpn/check.sh ``` The script doesn't work, but it still seems to be executed every five minutes: ``` tail /var/log/syslog | grep CRON ``` gives: ``` Mar 16 21:15:01 raspberrypi CRON[11113]: (root) CMD (/etc/openvpn/check.sh) ... ``` Moreover, when I run the script manually with `sudo ./check.sh`, the Pi reboots just like it should. I don't really understand what's going on here ? **Edit** : As suggested, I added the full path names and went from rebooting the Pi to restarting openvpn: ```sh #!/bin/bash if ! /bin/ps -ef | /bin/grep '[o]penvpn'; then cd /etc/openvpn/ /usr/sbin/openvpn --config /etc/openvpn/config.ovpn fi ``` The script still doesn't work, although it runs fine when I execute it myself. The script's permissions are 755, so it should be ok ?<issue_comment>username_1: The *path name* of the script matches the final `grep` so it finds itself, and is satisfied. The reason this didn't happen interactively was that you didn't run it with a full path. This is (a twist on) a very common FAQ. Tangentially, your script contains two very common antipatterns. You are reinventing `pidof` poorly, and you are examining `$?` explicitly. Unless you specifically require the exit code to be 1, you should simply be doing ``` if ! ps -ef | grep -q '[o]penvpn'; then ``` because the *purpose* of `if` is to run a command and examine its exit code; and notice also the trick to use a regex which doesn't match itself. But using `pidof` also lets you easily examine just the binary executable's file name, not its path. Upvotes: 2 <issue_comment>username_2: I finally understood why the script didn't work. Since it was located under `/etc/openvpn`, the condition `if ! ps -ef | grep -q '[o]penvpn'` wouldn't return `true` because of the script being executed. I noticed it when I changed the crontab line to: ``` */5 * * * * /etc/openvpn/check.sh >/home/pi/output 2>/home/pi/erroutput ``` the `output` file showed the `/etc/openvpn/check.sh` script being run. --- The script now is: ```sh #!/bin/bash if ! pidof openvpn; then cd /etc/openvpn/ /usr/sbin/openvpn --config /etc/openvpn/config.ovpn fi ``` and this works just fine. Thank you all. Upvotes: 1 [selected_answer]
2018/03/16
326
1,021
<issue_start>username_0: Why does the code below work (`&ds` is `12345678910`), but when I change `cats` to `cat`, `&ds` is just blank? I would expect changing `cats` to `cat` would mean `&ds` is `1 2 3 4 5 6 7 8 9 10`. ``` data new; length ds $500; ds = ""; do i = 1 to 10; ds = cats(ds, i, " "); end; call symputx('ds', ds); run; %put &ds ```<issue_comment>username_1: The function `cat()` will not trim the values so if you concatenate anything to `DS` and try to store it back into `DS` whatever you added is not stored because there is no room for it. It appears you actually want the `catx()` function. ``` ds = catx(' ',ds, i); ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: SAS tends to add leading and trailing spaces if you use the input buffer and doing text manipulation. you can use either the Strip() and catx() functions to remove leading and trailing spaces. With catx() you have the extra option of specifying a delimiter. ``` ds = cat(strip(ds), i, " "); ``` Upvotes: 0
2018/03/16
624
1,986
<issue_start>username_0: I have two ArrayLists (list1 & list2). I would like to see if any one (or more) of the objects in list2 (which are strings) occurs in list1. So, for some examples: ``` List list1 = Arrays.asList("ABCD", "EFGH", "IJKL", "QWER"); List list2 = Arrays.asList("ABCD", "1234"); //Should result in true, because "ABCD" is in list 1 & 2 ``` However, the method containsAll() does not work in this use case, as `1234` does not occur in list1 and will result in false; contains() does not work either. Besides writing my own implementation (comparing all of the values individually from list2 to list1), is there a similar method to contains(), where a list of strings can be passed in and compared to another list, returning true if one or more values are contained in that list? **More Examples:** ``` ArrayList list1 = {1, 2, 3, 4, 5} ArrayList list2 = {1, 2, 3} --> True ArrayList list2 = {3, 2, 1} --> True ArrayList list2 = {5, 6, 7, 8, 9} --> True ArrayList list2 = {6, 7, 8, 9} --> False ```<issue_comment>username_1: Like `list2.stream().anyMatch(list1::contains)` in java 8. Upvotes: 5 [selected_answer]<issue_comment>username_2: ``` list2.removeAll(list1); ``` If you just need a boolean result, use removeAll(); if the length of list2 shrinks, one or more items were in common. Upvotes: 1 <issue_comment>username_3: Pre-Java 8, you can use retainAll to achieve this quite simply: ``` Set intersect = new HashSet<>(listOne); intersect.retainAll(listTwo); return !intersect.isEmpty(); ``` Upvotes: 2 <issue_comment>username_4: There's no need to use streams for this task. Use the [`Collections.disjoint`](https://docs.oracle.com/javase/9/docs/api/java/util/Collections.html#disjoint-java.util.Collection-java.util.Collection-) method instead: ``` boolean result = !Collections.disjoint(list1, list2); ``` According to the docs: > > Returns `true` if the two specified collections have no elements in common. > > > Upvotes: 2
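For completeness, a self-contained sketch combining the two approaches from the answers (the class name here is invented for the example):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class OverlapCheck {
    public static void main(String[] args) {
        List<String> list1 = Arrays.asList("ABCD", "EFGH", "IJKL", "QWER");
        List<String> list2 = Arrays.asList("ABCD", "1234");

        // true when the two lists share at least one element
        boolean viaDisjoint = !Collections.disjoint(list1, list2);
        boolean viaStream = list2.stream().anyMatch(list1::contains);

        System.out.println(viaDisjoint); // true
        System.out.println(viaStream);   // true
    }
}
```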
2018/03/16
869
2,695
<issue_start>username_0: I have a scilab function that looks something like this (very simplified code just to get the concept of how it works): ``` function [A, S, Q]=myfunc(a) A = a^2; S = a+a+a; if S > A then Q = "Bigger"; else Q = "Lower"; end endfunction ``` And I get the expected result if I run: ``` --> [A,S,Q]=myfunc(2) Q = Bigger S = 6. A = 4. ``` But if I put matrices into the function I expect to get equivalent matrices back as an answer with a result but instead I got this: ``` --> [A,S,Q]=myfunc([2 4 6 8]) Q = Lower S = 6. 12. 18. 24. A = 4. 16. 36. 64. ``` Why isn't `Q` returning matrices of values like `S` and `A`? And how do I achieve that it will return "Bigger. Lower. Lower. Lower." as an answer? That is, I want to perform the operation on each element of the matrix.<issue_comment>username_1: Because in your program you wrote `Q = "Bigger"` and `Q = "Lower"`. That means that `Q` will only have one value. If you want to store the comparisons for every value in `A` and `S`, you have to make Scilab do that. You can achieve such behavior by using loops. This is how you can do it by using two `for` loops: ``` function [A, S, Q]=myfunc(a) A = a^2; S = a+a+a; //Get the size of input a [nrows, ncols] = size(a) //Traverse all rows of the input for i = 1 : nrows //Traverse all columns of the input for j = 1 : ncols //Compare each element if S(i,j) > A(i,j) then //Store each result Q(i,j) = "Bigger" else Q(i,j) = "Lower" end end end endfunction ``` **Beware of `A = a^2`.** It can break your function. It has different behaviors if input `a` is a vector (1-by-n or n-by-1 matrix), rectangle matrix (m-by-n matrix, m ≠ n ), or square matrix (n-by-n matrix): * Vector: it works like `.^`, i.e. it raises each element individually ([see Scilab help](https://help.scilab.org/docs/6.0.0/en_US/dot.html)). * Rectangle: it won't work because it has to follow the rule of [matrix multiplication](https://en.wikipedia.org/wiki/Matrix_multiplication). * Square: it works and follows the rule of matrix multiplication. Upvotes: 1 <issue_comment>username_2: I will add that in Scilab, the fewer the number of loop, the better : so @username_1 answer may rewrite to ``` function [A, S, Q]=myfunc(a) A = a.^2; // used element wise power, see username_1 advice S = a+a+a; Q(S > A) = "Bigger" Q(S <= A) = "Lower" Q = matrix(Q,size(a,1),size(a,2)) // a-like shape endfunction ``` Upvotes: 1 [selected_answer]
2018/03/16
650
2,155
<issue_start>username_0: I would like to write a JavaScript library using ES6 modules to manage submodules. I would also like to be able to use it in both front-end applications (with babel and webpack) and NodeJS back-end projects. Is there a way to "build" the library, written with ES6 modules, in order to use it also as a NodeJS library?
2018/03/16
1,818
4,815
<issue_start>username_0: I am building an accordion three level menu. I have zero knowledge in writing this other than what I've read through blogs on this site and others and copied from existing accordions. I was able to build a successful two level accordion but I cannot post two separate two level accordions on our sharepoint site - they both cancel each other out and won't open. So I combined them and built a three level accordion but I cannot get the third level to open. To simplify I cut down the amount of info from the html and removed sensitive links. Can anyone assist please? ```css $(document).ready(function(){ $("#nav > li > a").on("click", function(e){ if($(this).parent().has("ul")) { e.preventDefault(); } if(!$(this).hasClass("open")) { // hide any open menus and remove all other classes $("#nav li ul").slideUp(350); $("#nav li a").removeClass("open"); // open our new menu and add the open class $(this).next("ul").slideDown(350); $(this).addClass("open"); } else if($(this).hasClass("open")) { $(this).removeClass("open"); $(this).next("ul").slideUp(350); } }); }); ol, ul, li { padding: 0; } menu, nav, section { display: block; } ol, ul { list-style: none; } blockquote, q { quotes: none; } blockquote:before, blockquote:after, q:before, q:after { content: ''; content: none; } strong { font-weight: bold; } table { border-collapse: collapse; border-spacing: 0; } h1 { font-family: 'Merienda', 'Trebuchet MS', Verdana, sans-serif; font-size: 2.95em; line-height: 1.7em; margin-bottom: 20px; font-weight: bold; letter-spacing: -0.03em; color: #675d90; text-shadow: 2px 2px 0px rgba(255,255,255,0.65); text-align: center; } #w { display: block; width: 740px; margin: 0 auto; padding-top: 45px; } /\* nav menu styles \*/ #nav { display: block; width: 280px; margin: 0 auto; -webkit-box-shadow: 3px 2px 3px rgba(0,0,0,0.7); -moz-box-shadow: 3px 2px 3px rgba(0,0,0,0.7); box-shadow: 3px 2px 3px rgba(0,0,0,0.7); } #nav li { } #nav > li > a { display: block; padding: 16px 18px; font-size: 1.3em; font-weight: bold; color: #d4d4d4; text-decoration: none; border-bottom: 1px solid #212121; background-color: #343434; background: -webkit-gradient(linear, left top, left bottom, from(#214c7c), to(#284e7a)); background: -webkit-linear-gradient(top, #214c7c, #284e7a); background: -moz-linear-gradient(top, #214c7c, #284e7a); background: -ms-linear-gradient(top, #214c7c, #284e7a); background: -o-linear-gradient(top, #214c7c, #284e7a); background: linear-gradient(top, #214c7c, #284e7a); } #nav > li > a:hover, #nav > li > a.open { color: #e9e9e9; border-bottom-color: #384f76; background-color: #6985b5; background: -webkit-gradient(linear, left top, left bottom, from(#6985b5), to(#456397)); background: -webkit-linear-gradient(top, #6985b5, #456397); background: -moz-linear-gradient(top, #6985b5, #456397); background: -ms-linear-gradient(top, #6985b5, #456397); background: -o-linear-gradient(top, #6985b5, #456397); background: linear-gradient(top, #6985b5, #456397); } #nav li ul { display: none; background: #4a5b78; } #nav li ul li a { display: block; background: none; padding: 10px 0px; padding-left: 30px; font-size: 1.1em; text-decoration: none; font-weight: bold; color: #e3e7f1; text-shadow: 1px 1px 0px #000; } #nav li ul li a:hover { background: #394963; } ``` ```html * [Member](#) + [ATAAPS](#) - [LOG IN](www.google.com) - [google](www.google.com)- * [Supervisor](#) + [ATAAPS](#) - [LOG IN](www.google.com) - [Traumatic Injury](www.google.com)- ```<issue_comment>username_1: Here is one simple jquery 3 level 
Accordion: <https://stackoverflow.com/a/34652439/1384591> and one pure css multi level accordion: <https://medium.com/@AlexDevero/multi-level-sliding-accordion-only-with-css-1adc6ca6de16> PS. Regarding your code, I see 2 issues: the html code doesn't validate, and the javascript is applied only to $("#nav > li > a") that translates to the first level of links. <https://developer.mozilla.org/en-US/docs/Web/CSS/Child_selectors> Upvotes: 0 <issue_comment>username_2: I hope this helps, i made it rather quickly. But you could just assign an attribute to each one. ```js $('.header').click(function(){ var accordId = $(this).attr('accordID'); $('.content').slideUp(); $('.content[accordID="' + accordId + '"]').slideToggle(); }) ``` ```css .wrapper { width:200px; height:auto; float:left; background: #ebebeb; border:1px solid #d9d9d9; } .header { padding:10px; background:#666; color:#fff; } .content { display:none; } ``` ```html Header Content Header 2 Content 2 Header 3 Content 3 ``` Upvotes: 1
2018/03/16
780
2,215
<issue_start>username_0: Suppose I have the following: ``` df = pd.DataFrame({'a':range(2), 'b':range(2), 'c':range(2), 'd':range(2)}) ``` I'd like to "pop" two columns ('c' and 'd') off the dataframe, into a new dataframe, leaving 'a' and 'b' behind in the original df. The following does not work: ``` df2 = df.pop(['c', 'd']) ``` Here's my error: ``` TypeError: '['c', 'd']' is an invalid key ``` Does anyone know a quick, classy solution, besides doing the following? ``` df2 = df[['c', 'd']] df3 = df[['a', 'b']] ``` I know the above code is not *that* tedious to type, but this is why DataFrame.pop was invented--to save us a step when popping one column off a database.<issue_comment>username_1: This will have to be a two step process (you *cannot* get around this, because as rightly mentioned, `pop` works for a single column and returns a Series). First, slice `df` (step 1), and then drop those columns (step 2). ``` df2 = df[['c', 'd']].copy() df = df.drop(['c', 'd'], axis=1) ``` And here's the one-liner version using `pd.concat`: ``` df2 = pd.concat([df.pop(x) for x in ['c', 'd']], axis=1) ``` This is still a two step process, but you're doing it in one line. ``` df a b 0 0 0 1 1 1 df2 c d 0 0 0 1 1 1 ``` --- With that said, I think there's value in allowing `pop` to take a list-like of column headers appropriately returning a DataFrame of popped columns. This would make a good feature request for [GitHub](https://github.com/pandas-dev/pandas/issues), assuming one has the time to write one up. Upvotes: 6 [selected_answer]<issue_comment>username_2: Here's an alternative, but I'm not sure if it's more classy than your original solution: ``` df2 = pd.DataFrame([df.pop(x) for x in ['c', 'd']]).T df3 = pd.DataFrame([df.pop(x) for x in ['a', 'b']]).T ``` Output: ``` print(df2) # c d #0 0 0 #1 1 1 print(df3) # a b #0 0 0 #1 1 1 ``` Upvotes: 3 <issue_comment>username_3: new\_df = old\_df.loc[:,pop\_columns] Upvotes: 1 <issue_comment>username_4: If you *don't* want to **copy** your original pd.DataFrame, using list comprehension has nice code ``` list_to_pop = ['a', 'b'] [df.pop(col) for col in list_to_pop] ``` Upvotes: 1
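If this comes up often, the accepted answer's `pd.concat`-of-`pop`s idea can be wrapped in a small helper (the helper name is just an illustration); note that, like `DataFrame.pop`, it mutates the original frame:

```python
import pandas as pd

def pop_columns(df, cols):
    """Remove cols from df in place and return them as a new DataFrame."""
    return pd.concat([df.pop(c) for c in cols], axis=1)

df = pd.DataFrame({'a': range(2), 'b': range(2), 'c': range(2), 'd': range(2)})
df2 = pop_columns(df, ['c', 'd'])
print(df.columns.tolist())   # ['a', 'b']
print(df2.columns.tolist())  # ['c', 'd']
```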
2018/03/16
801
2,630
<issue_start>username_0: I built a Union query in Access that takes results from two separate queries and joins them together. It's working fine but the results are not being grouped by Sales, Cost, and Profit, where I am trying to sum on those three fields. Here is my code: ``` SELECT Store, count( [MASTER CREDIT MEMO QUERY].[Count]) as Count, [Sales Code], Name, Sum( [MASTER CREDIT MEMO QUERY].[Sales]) as Sales, Sum( [MASTER CREDIT MEMO QUERY].[Cost]) as Cost, Sum( [MASTER CREDIT MEMO QUERY].[Profit]) as Profit FROM [MASTER CREDIT MEMO QUERY] GROUP by Store, [Sales Code], Name UNION SELECT Store, count([MASTER SALES INVOICE QUERY].[Count]) as Count, [Sales Code], Name, Sum([MASTER SALES INVOICE QUERY].[Sales]) as Sales, Sum([MASTER SALES INVOICE QUERY].[Cost]) as Cost, Sum([MASTER SALES INVOICE QUERY].[Profit]) as Profit FROM [MASTER SALES INVOICE QUERY] GROUP BY Store, [Sales Code], Name ORDER BY Sales DESC; ``` Can anyone help me get the grouping to work?<issue_comment>username_1: You are grouping the separate selects in the union, rather than the result of the union itself. You should put the group by outside of the whole query, as well as the group functions. Please try this one (I'm not familiar with ms-access syntax, but it should work): ``` SELECT [Store], count([Count]) as [Count], [Sales Code], Name, Sum( [Sales]) as Sales, Sum( [Cost]) as Cost, Sum( [Profit]) as Profit FROM (SELECT Store, [Count], Sales, Cost, Profit, Name, [Sales Code] FROM [MASTER CREDIT MEMO QUERY] UNION SELECT Store, [Count], Sales, Cost, Profit, Name, [Sales Code] FROM [MASTER SALES INVOICE QUERY]) t GROUP BY Store, [Sales Code], Name ORDER BY Sales DESC ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: Alex - thanks for your help. Your solution didn't work exactly, but you got me 95% of the way. I took your code and modified it after trying to figure out exactly what it was doing. Here is the solution, in case you wanted to know. Thanks so much for your assistance. ``` SELECT TBL1.Store, Count(TBL1.Count) AS [Count], TBL1.[Sales Code], TBL1.Name, Sum(TBL1.Sales) AS Sales, Sum(TBL1.Cost) AS Cost, Sum(TBL1.Profit) AS Profit FROM (SELECT Store, Count, [MASTER CREDIT MEMO QUERY].Sales, Cost, Profit, Name, [Sales Code] FROM [MASTER CREDIT MEMO QUERY] UNION SELECT Store, Count, [MASTER SALES INVOICE QUERY].Sales, Cost, Profit, Name, [Sales Code] FROM [MASTER SALES INVOICE QUERY]) AS TBL1 GROUP BY TBL1.Store, TBL1.[Sales Code], TBL1.Name ORDER BY TBL1.Name; ``` Upvotes: 0
2018/03/16
708
2,460
<issue_start>username_0: So from what I have read/researched and applied in my programs I have not put any output specific methods in my classes. As in it is bad practice to do so. For example for when creating a method I do this: ``` public string Greeting() { return $"Hello {Name}"; //Name is a property of the class } ``` Instead of doing this: ``` public void Greeting() { Console.WriteLine($"Hello {Name}"); //Name is a property of the class } ``` **Question:** What is the best practice to output a message to the console when a constructor runs and needs to return a message? I want to keep my class portable between different application types. Do I have to create a `Console.WriteLine()` and just remove that if I move my class to another type of app? That would go against best practice right?
2018/03/16
1,491
5,010
<issue_start>username_0: I am trying to map a .img file, and I am not sure why my code is not working. Here is my code, and when I run the code I keep getting my error, that p is equal to MAP\_FAILED ``` int diskinfo(int argc, char* argv[]){ void *p; char *size if (argc < 2) { printf("Please put ./diskinfo \n"); exit(1); } int fp = open(argv[1],"rb+"); if(fp == NULL) { printf("Error opening file"); exit(1); } struct stat buf; fstat(fp, &buf); p = mmap(NULL,buf.st\_size, PROT\_READ, MAP\_PRIVATE, fp, 0); if(p == MAP\_FAILED){ printf("Error mapping memory\n"); exit(1); }} ``` If anyone has any suggestions on where my code is wrong or if I am missing a piece of information I would be very grateful. Changing to perror() does not work. Also changing this function doesn't change the fact that p is still equal to MAP\_FAILED ``` if(p == MAP_FAILED){ return; } ``` I changed the below what is the solution: ``` int fp = open(argv[1],O_RDWR); if(fp < 0){ . . . ``` But I am still returning<issue_comment>username_1: Your code is not working at the beginning, check your `open` call. If you compiling with proper flags like `-Wall -Werror` you should get warnings like this: ``` error: passing argument 2 of ‘open’ makes integer from pointer without a cast [-Werror] note: expected ‘int’ but argument is of type ‘char *’ ``` You should designate `open` and `fopen` functions, they are different things. ``` int open(const char *pathname, int flags); ``` Upvotes: 2 <issue_comment>username_2: It is unclear from the state of your question whether you were ever able to get `mmap` to work. Your last edit added: ``` int fp = open(argv[1],O_RDWR); ``` Which is fine if you are writing back to the file you have opened, but if not, you should open using `O_RDONLY` to prevent inadvertent modification of your original file. While not an error, `fp` is generally used as a *file pointer* associated with file stream operations when the file is opened with `fopen`. Here you are using low-level I/O with `read/write` which uses a *file descriptor* instead of a *stream pointer*. When referencing a descriptor, the general vernacular uses `fd` as short-hand for *file descriptor*. (personally, it was awkward to see the two used in a interchanged manner -- which I suspect is the case for others as well) Your remaining use of `fstat`, the resulting `buf.st_size` and your call to `mmap` are not the problem. Your problem lies elsewhere -- which is one of the primary reasons you should post [**A Minimal, Complete, and Verifiable Example (MCVE)**](http://stackoverflow.com/help/mcve). That said, to insure you have incorporated your changes in the proper manner, I'll leave you with a simple example that `mmap`s a file and simply dumps the file to `stdout` (so limit your input filenames to a reasonably short text file to work with the example -- otherwise you will see all sorts of strange characters). 
Work through the following: ``` #include #include #include #include #include #include #include #include #include #include int diskinfo (int argc, char \*argv[]) { char \*p = NULL; /\* pointer to mmapped file \*/ int fd = 0; /\* file descriptor \*/ struct stat buf = {0}; /\* struct stat \*/ ssize\_t size = 0; /\* file size (typed for write return) \*/ if (argc < 2) { /\* validate at least 2 arguments \*/ printf ("Please put %s \n", argv[0]); exit (EXIT\_FAILURE); } if ((fd = open (argv[1], O\_RDONLY)) == -1) { /\* open/validate file \*/ perror ("Error opening file"); exit (EXIT\_FAILURE); } if (fstat (fd, &buf) == -1) { /\* stat file for size \*/ perror ("error: fstat buf"); exit (EXIT\_FAILURE); } size = buf.st\_size; /\* get file size \*/ /\* mmap file and validate return \*/ if ((p = mmap (NULL, buf.st\_size, PROT\_READ, MAP\_PRIVATE, fd, 0)) == (void \*) -1) { perror ("mmap failed"); exit (EXIT\_FAILURE); } /\* simple example, output mmapped file to stdout \*/ if (write (STDOUT\_FILENO, p, size) != size) { perror ("error on write"); exit (EXIT\_FAILURE); } munmap (p, size); /\* unmap file \*/ return 1; /\* return success (fn could be void due to exit) \*/ } int main (int argc, char \*\*argv) { diskinfo (argc, argv); /\* call diskinfo function \*/ return 0; } ``` (**note:** your check of `if (argc < 2)` should really be done in the calling function, `main()` here. There is no reason to call `diskinfo` until you have validated you have a filename to open. You could actually refactor your code to check the arguments and `open` the file in `main()` and simply pass an open file-descriptor to `diskinfo` as a parameter) **Example Use/Output** ``` $ ./bin/mmapdiskinfo dat/captnjack.txt This is a tale Of <NAME> A Pirate So Brave On the Seven Seas. ``` Look things over and let me know if you have any questions. If you still cannot get your function to work, then post a MCVE so that we can help further. Upvotes: 2 [selected_answer]
2018/03/16
998
3,490
<issue_start>username_0: I want to move a NULL *std::unique\_ptr* to a *std::shared\_ptr*, like so: ``` std::unique_ptr test = nullptr; std::shared\_ptr test2 = std::move(test); ``` As far as I know it should be legal to do so, and it does run fine in Visual Studio 2015 and GCC. However, I can't do the same with a *std::unique\_ptr* which has a deleter declaration, like so: ``` std::unique_ptr test = nullptr; std::shared\_ptr test2 = std::move(test); ``` The code above will not compile in visual studio and will trigger the static assert failure "error C2338: unique\_ptr constructed with null deleter pointer". I can use a *std::function* deleter instead, in which case the static assert failure can be circumvented: ``` std::unique_ptr> test = nullptr; std::shared\_ptr test2 = std::move(test); ``` In this case the code compiles fine, but I get an abort as soon as the last *std::shared\_ptr* copy of *test2* is destroyed. Why are the latter two cases so problematic? Strangely enough, if I change the type of *test2* from *std::shared\_ptr* to *std::unique\_ptr*, the second case still triggers the static assert failure, but both case 1 and case 3 work just fine: ``` { std::unique_ptr test = nullptr; std::unique\_ptr test2 = std::move(test); // Works fine } { //std::unique\_ptr test = nullptr; // triggers a static assert failure //std::unique\_ptr test2 = std::move(test); } { std::unique\_ptr> test = nullptr; std::unique\_ptr> test2 = std::move(test); // Works fine } ```<issue_comment>username_1: The `unique_ptr` constructor you are trying to use, which default-constructs the deleter, is ill-formed (before C++17) or disabled by SFINAE (as of C++17) if the deleter type is a pointer, in order to stop you from accidentally creating a `unique_ptr` whose deleter is itself a null pointer. If you really want to create such a `unique_ptr`, you can do so by explicitly passing a null deleter: ``` std::unique_ptr test(nullptr, nullptr); ``` This `unique_ptr` object is not very useful, because it can't delete anything. By using a null `std::function` deleter, you've told the compiler "yes, I really want to shoot myself in the foot". Of course, when the last `std::shared_ptr` is destroyed, the null `std::function` is invoked, and undefined behaviour occurs. What else did you expect? Upvotes: 4 [selected_answer]<issue_comment>username_2: I'll echo Brian's answer and add that in situations like this where the function shouldn't be null, you can use a function *reference,* which are nonnullable like all C++ references, instead of function pointers. ``` void delete_float(float *f) {delete f;} using Deleter = void(float*); // Function pointers constexpr void(*fptr)(float*) = delete_float; constexpr Deleter *fptr_typedef = delete_float; constexpr auto fptr_auto = delete_float; constexpr auto *fptr_auto2 = delete_float; // Function references constexpr void(&fref)(float*) = delete_float; constexpr Deleter &fref_typedef = delete_float; constexpr auto &fref_auto = delete_float; ``` The one gotcha you need to keep in mind with function references is that lambdas implicitly convert to function pointers, but not function references, so you need to use the dereference operator `*` to convert a lambda to a function reference. ``` const Deleter *fptr_lambda = [](float *f) {delete f;}; // const Deleter &fref_lambda = [](float *f) {delete f;}; // error const Deleter &fref_lambda_fixed = *[](float *f) {delete f;}; ``` Upvotes: 1
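Following on from the answers, a brief sketch of the safe variant: give the `unique_ptr` a real (non-null) deleter up front, and the move into `shared_ptr` then works without tripping the null-deleter checks (the `delete_float` helper is the same idea as in the second answer):

```cpp
#include <iostream>
#include <memory>

void delete_float(float* f) { delete f; }

int main() {
    // explicit, non-null deleter of pointer type
    std::unique_ptr<float, void (*)(float*)> test(new float(1.0f), delete_float);

    // shared_ptr adopts both the pointer and the deleter
    std::shared_ptr<float> test2 = std::move(test);
    std::cout << *test2 << '\n';  // prints 1

    // a null unique_ptr with a valid deleter also converts without issue
    std::unique_ptr<float, void (*)(float*)> empty(nullptr, delete_float);
    std::shared_ptr<float> test3 = std::move(empty);
    return 0;
}
```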
2018/03/16
2,114
5,020
<issue_start>username_0: I'm trying to split a record by underscore. Originally it was about `_` and `.` as FS and only for the first column. But right now it appears that no splitting works at all? ``` cat test_file.tsv mg.reads.per.gene_bcsZ_A1.tsv contig_21128 476 mg.reads.per.gene_bcsZ_A1.tsv contig_3712 1774 mg.reads.per.gene_bcsZ_A2.tsv contig_38480 184 mg.reads.per.gene_bcsZ_A2.tsv contig_62779 1154 mg.reads.per.gene_bcsZ_A4.tsv contig_115486 113 mg.reads.per.gene_bcsZ_A4.tsv contig_14345 937 mg.reads.per.gene_bcsZ_A5.tsv contig_19362 426 mg.reads.per.gene_bcsZ_A5.tsv contig_53656 31 mg.reads.per.gene_bcsZ_A6.tsv contig_100190 26 mg.reads.per.gene_bcsZ_A6.tsv contig_23343 164 ``` and I've tried numerous variants such as ``` awk 'BEGIN { FS = _ } ; {print $0}' test_file.tsv awk 'BEGIN { FS = '_' } ; {print $0}' test_file.tsv awk 'BEGIN { FS = "_" } ; {print $0}' test_file.tsv awk 'BEGIN { FS ="_" } ; {print $0}' test_file.tsv awk -F'_' '{print $0}' test_file.tsv awk -F"gene" '{print $0}' test_file.tsv ``` and it gives the unchanged output. I was expecting: ``` mg.reads.per.gene bcsZ A1.tsv contig 21128 476 mg.reads.per.gene bcsZ A1.tsv contig 3712 1774 mg.reads.per.gene bcsZ A2.tsv contig 38480 184 mg.reads.per.gene bcsZ A2.tsv contig 62779 1154 mg.reads.per.gene bcsZ A4.tsv contig 115486 113 mg.reads.per.gene bcsZ A4.tsv contig 14345 937 mg.reads.per.gene bcsZ A5.tsv contig 19362 426 mg.reads.per.gene bcsZ A5.tsv contig 53656 31 mg.reads.per.gene bcsZ A6.tsv contig 100190 26 mg.reads.per.gene bcsZ A6.tsv contig 23343 164 ``` Am I missing something obvious here? EDIT: yes I did: "It is a common error to try to change the field separators in a record simply by setting FS and OFS, and then expecting a plain ‘print’ or ‘print $0’ to print the modified record." (awk manual, "understanding $0") EDIT: and to reach the final goal (splitting by \_ and . only in the first column, this one works (at least in one line separated by ";"): ``` awk 'BEGIN { OFS = "\t" } { split ($1, a, "_") split (a[3], b, "\\.") print $2, a[2], b[1] }' test_file.tsv ``` output: ``` contig_21128 bcsZ A1 contig_3712 bcsZ A1 contig_38480 bcsZ A2 contig_62779 bcsZ A2 contig_115486 bcsZ A4 contig_14345 bcsZ A4 contig_19362 bcsZ A5 contig_53656 bcsZ A5 contig_100190 bcsZ A6 contig_23343 bcsZ A6 ```<issue_comment>username_1: You're misunderstanding the use of the field separator in Awk. Awk's field separator tells Awk what value to use to divide the columns in the input, where it defaults to whitespace. To help you understand what's going on, here's what you're currently doing (I've reduced the data file to just 3 lines so that it's easier to manage): ``` $awk -F "_" '{print $0}' test_file.tsv mg.reads.per.gene_bcsZ_A1.tsv contig_21128 476 mg.reads.per.gene_bcsZ_A1.tsv contig_3712 1774 mg.reads.per.gene_bcsZ_A2.tsv contig_38480 184 $awk -F "_" '{print $1}' test_file.tsv mg.reads.per.gene mg.reads.per.gene mg.reads.per.gene $awk -F "_" '{print $2}' test_file.tsv bcsZ bcsZ bcsZ $awk -F "_" '{print $3}' test_file.tsv A1.tsv contig A1.tsv contig A2.tsv contig $awk -F "_" '{print $4}' test_file.tsv 21128 476 3712 1774 38480 184 ``` See, you've divided your output into exactly 4 columns, broken up by every time there's an underscore, which are in awk as `$1`, `$2`, `$3`, and `$4`. Note that `$0` returns all the columns joined by the field separator, which looks just like your initial input. What you want is to swap all the underscores for spaces so that there appear to be 6 columns. 
This can be done extremely easily by the use of the `tr` command: ``` $ tr '_' ' ' < test_file.tsv mg.reads.per.gene bcsZ A1.tsv contig 21128 476 mg.reads.per.gene bcsZ A1.tsv contig 3712 1774 mg.reads.per.gene bcsZ A2.tsv contig 38480 184 ``` Now you've got your six columns, and you can feed the output into awk if you want after that for whatever else you want to do. Upvotes: 1 <issue_comment>username_2: `$0` is the whole line in awk. ``` awk -F_ '{$1=$1;print}' sample.csv ``` Input field separator is `_` and default output field separator is space. `{$1=$1;print}` rebuilds fields based on the output separator and print them all. output: ``` mg.reads.per.gene bcsZ A1.tsv contig 21128 476 mg.reads.per.gene bcsZ A1.tsv contig 3712 1774 mg.reads.per.gene bcsZ A2.tsv contig 38480 184 mg.reads.per.gene bcsZ A2.tsv contig 62779 1154 mg.reads.per.gene bcsZ A4.tsv contig 115486 113 mg.reads.per.gene bcsZ A4.tsv contig 14345 937 mg.reads.per.gene bcsZ A5.tsv contig 19362 426 mg.reads.per.gene bcsZ A5.tsv contig 53656 31 mg.reads.per.gene bcsZ A6.tsv contig 100190 26 mg.reads.per.gene bcsZ A6.tsv contig 23343 164 ``` Upvotes: 3 [selected_answer]
2018/03/16
631
2,014
<issue_start>username_0: I have a pdf file with multiple pages, but I am interested in only a subgroup of them. For example, my original PDF has 30 pages and I want only the pages 10 to 16. I tried using the function split\_pdf from tabulizer package, that only splits the pdf page to page (resulting in 200 files, one for each page), followed by merge\_pdfs(which merge pdf files). It worked properly, but is taking ages (and I have around 2000 pdf files I have to split). This is the code I am using: ``` split = split_pdf('file_path') start = 10 end = 16 merge_pdfs(split[start:end], 'saving_path') ``` I couldn't find any better option to do this. Any help would appreciated.<issue_comment>username_1: Unfortunatly, I find it a bit unclear what kind of data is in your PDF and what you are trying to extract from it. So I outline two approaches. 1. If you have tables in the pdf, you should be able to extract the data from said pages using using: `tab <- tabulizer::extract_tables(file = "path/file.pdf", pages = 10:16)` 2. If you only want the text, you should use `pdftools` which is a lot faster: `text <- pdftools::pdf_text("path/file.pdf")[10:16]` Upvotes: 4 [selected_answer]<issue_comment>username_2: Install `pdftk` (if you don't already have it). Assuming it is on your path and `myfile.pdf` is in the current directory run this from R: ``` system("pdftk myfile.pdf cat 10-16 output myfile_10to16.pdf") ``` Upvotes: 2 <issue_comment>username_3: As an accessory to [G.Grothendieck's answer](https://stackoverflow.com/a/49330610/10647267), one could also use the package `staplr`, which is an R wrapper around the program `pdftk`: ``` library('staplr') staplr::select_pages( selpages = 10:16, input_filepath = 'file_path', output_filepath = 'saving_path') ``` In my experience, plain `pdftk` works faster. But, if you need to do something complex and you are more familiar with R syntax than with bash syntax, using the `staplr` package will save on coding time. Upvotes: 2
2018/03/16
2,244
6,712
<issue_start>username_0: VScodeDebugGoAppEngine
======================

A Hello World tutorial that shows how to set up VS Code to debug Golang App Engine code with Visual Studio Code (aka VS Code).

This uses the Hello World code from the App Engine documentation:

```
go get -u -d github.com/directmeasure/VScodeDebugGoAppEngine.git

```

on a Mac running macOS 10.13.3. I've tested the code and the server works locally. I'm trying to figure out how to step into the code with the debugger so I can learn how to use the debugger on other projects.

These were the best instructions I could find for using VS Code with GAE, but they seem to be outdated based on updates to Golang (e.g. the switch to gcloud, the -go\_debugging flag and the change of directory structure): <https://medium.com/@dbenque/debugging-golang-appengine-module-with-visual-studio-code-85b3aa59e0f>

Here are the steps I took:

set up Environment
==================

* added to .bash\_profile

```
export BASEFOLDER="/Users/Bryan/google-cloud-sdk/"
export GOROOT="/usr/local/go" # this shouldn't have to be set with the current version, doing it to follow the tutorial

```

How I have attempted to get the debugger to run:

start local server
====================

```
dev_appserver.py --go_debugging=true app.yaml

```

attach local binary to Delve
============================

```
ps aux | grep _go_app
dlv attach <#using the PID from the server binary>

```

Delve successfully attaches to the binary.

When I start the **Debug session**, the blue progress bar never stops scanning horizontally. The VARIABLE sidebar is never populated with the variables in hello.go.

The breakpoint is set at hello.go, line 21.

The Debug REPL terminal displays:

```
Verbose logs are written to: /var/<KEY>00gp/T/vscode-go-debug.txt
16:02:31, 2018-4-5
InitializeRequest
InitializeResponse
Using GOPATH: /Users/Bryan/go
fmt.Print(u)
Please start a debug session to evaluate

```

Here is the launch.json config:

```
{
 "version": "0.2.0",
 "configurations": [
 {
 "name": "Launch",
 "type": "go",
 "request": "launch",
 "mode": "debug",
 "remotePath": "",
 //"port": 1234,
 "port": 2345 // docs say port should match assigned port headless server, https://github.com/Microsoft/vscode-go/wiki/Debugging-Go-code-using-VS-Code#remote-debugging
 // this creates bind error
 "host": "127.0.0.1",
 "program": "${workspaceFolder}/hello.go",
 "env": {},
 "args": [],
 "showLog": true,
 "trace": true,
 }
 ]
}

```

Here are the versions I have installed:

```
go version go1.10 darwin/amd64

$ gcloud version
Google Cloud SDK 197.0.0
app-engine-go
app-engine-python 1.9.68
bq 2.0.31
core 2018.04.06
gsutil 4.30

VS code extension:
Go 0.6.78

```

EDIT###########################

```
$ lsof -n -i :8080
Bryan@Bryans-MacBook-Pro Thu Apr 12 17:02:04 ~
$ lsof -n -i :2345
Bryan@Bryans-MacBook-Pro Thu Apr 12 17:03:34 ~
$ ps aux | grep _go_app
Bryan 7433 0.0 0.0 2434840 800 s000 S+ 5:03PM 0:00.00 grep _go_app
Bryan 7426 0.0 0.0 556603172 3896 s002 S+ 5:02PM 0:00.01 /<KEY>go-bin/_go_app
Bryan@Bryans-MacBook-Pro Thu Apr 12 17:03:52 ~
$ dlv attach --headless -l "localhost:2345" 7426 /var/folders/mw/0y88j8_54bjc93d_lg3120qw0000gp/T/tmp8GWk1gappengine-go-bin/_go_app
API server listening at: 127.0.0.1:2345

```

When I start the debugger, the REPL shows:

```
Verbose logs are written to: /var/<KEY>/vscode-go-debug.txt
couldn't start listener: listen tcp 127.0.0.1:2345: bind: address already in use
Process exiting with code: 1

```<issue_comment>username_1: Yes, it is outdated. The page you are getting it from does not exist.
Instead, you can run

```
go get github.com/GoogleCloudPlatform/golang-samples/appengine/helloworld/...

```

Upvotes: 1 <issue_comment>username_2: VS Code never attaches to Delve because it is waiting to connect to the remote Delve server at `127.0.0.1:2345`. If you `dlv attach` in headless mode, listening at the correct address, you should hopefully be able to connect.

The steps below describe how to debug a Go App Engine application running with `dev_appserver.py` and no other tools/helpers. However, when you make changes to your Go code, `dev_appserver.py` recompiles and restarts the application, changing the PID Delve needs to debug. <http://github.com/dbenque/delveAppengine> can help keep Delve attached to the right process. See [here](https://medium.com/@dbenque/debugging-golang-appengine-module-with-visual-studio-code-85b3aa59e0f) for a tutorial.

1. Install the [VS Code Go extension](https://code.visualstudio.com/docs/languages/go).
2. `go get -u -d github.com/directmeasure/VScodeDebugGoAppEngine.git`
3. `cd $GOPATH/src/github.com/GoogleCloudPlatform/golang-samples/appengine/helloworld`

 Note: if your GOPATH has more than one entry, `cd` to the directory `go get` downloaded to.
4. Start the App Engine development server: `dev_appserver.py --go_debugging=true app.yaml`
5. Visit <http://localhost:8080> to ensure the server is running.
6. Find the PID of the Go process: `ps aux | grep _go_app` (an optional Python helper for this step is sketched after this answer).
7. Start the Delve server (select any port available on your system): `dlv --headless -l "localhost:2345" attach $GO_APP_PID`
8. Open the VS Code debug tab (⇧⌘D on macOS, Ctrl+Shift+D on Windows & Linux).
9. Create a new launch configuration by clicking the gear and selecting any entry (see official docs [here](https://code.visualstudio.com/docs/editor/debugging#_launch-configurations)).
10. Create a "Go: Connect to server" entry:![create config dropdown](https://i.stack.imgur.com/mxCrd.png)

 Note: this is only a template - you can edit it later.
11. Customize the config to point to the port you specified when starting Delve. Here is my full configuration:

```
{
 "name": "Launch",
 "type": "go",
 "request": "launch",
 "mode": "debug",
 "remotePath": "",
 "port": 2345,
 "host": "127.0.0.1",
 "program": "${fileDirname}",
 "env": {},
 "args": [],
 "showLog": true
}

```

12. Add breakpoints as desired and visit <http://localhost:8080> again. Execution should stop when a breakpoint is reached, variables should be listed in the variables section in VS Code, and the call stack should be in the call stack section.

For general help with debugging Go code in VS Code (not run with App Engine), see <https://github.com/Microsoft/vscode-go/wiki/Debugging-Go-code-using-VS-Code>.

Upvotes: 5 [selected_answer]
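As a small aside to step 6 above, the PID lookup can also be done programmatically. This is only an optional illustration of mine, not part of the accepted workflow, and it assumes the third-party `psutil` package is installed:

```
# Sketch: find the PID of the dev_appserver-compiled Go binary (_go_app)
# without piping `ps` through `grep`. Assumes `pip install psutil`.
import psutil

matches = [
    proc.info
    for proc in psutil.process_iter(["pid", "name"])
    if "_go_app" in (proc.info["name"] or "")
]
for info in matches:
    print(info["pid"], info["name"])  # pass one of these PIDs to `dlv attach`
```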
2018/03/16
294
1,090
<issue_start>username_0: ```
db.collection("accounts").findOne({Nickname: { $regex : new RegExp(player, "i") }}, function(err, result) {
});

```

This is what I currently have; the problem is that I get partial (substring) matches of the player variable. But I want only exact, albeit case-insensitive, matches.<issue_comment>username_1: Prefix `player` with ^ and suffix with $, so that it matches the entire string:

`"^" + player + "$"`

^ matches the start of the string

$ matches the end of the string

Reference for [boundary regex characters](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/RegExp#boundaries)

So with this expression you're saying "find me a Nickname which starts and ends with this string", i.e. the entire string.

Upvotes: 3 [selected_answer]<issue_comment>username_2: Your regex actually asks for a substring. To ask for an exact string:

```
new RegExp("^" + player + "$", "i")

```

(Keep everything else the same.)

The `^` matches the beginning of the input, the `$` matches the end. This way, any substrings won't match.

Upvotes: 3
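To see the anchoring idea in isolation, here is a short self-contained Python sketch of mine; the nicknames and the `player` value are invented for illustration, and `re.escape` is an extra safeguard I'm adding in case the nickname contains regex metacharacters:

```
import re

player = "Alice"  # hypothetical value
nicknames = ["Alice", "alice", "AliceTheGreat", "RealAlice99"]

# Unanchored pattern: behaves like new RegExp(player, "i") in the question.
substring_pattern = re.compile(player, re.IGNORECASE)
# Anchored pattern: behaves like new RegExp("^" + player + "$", "i").
exact_pattern = re.compile("^" + re.escape(player) + "$", re.IGNORECASE)

print([n for n in nicknames if substring_pattern.search(n)])  # all four match
print([n for n in nicknames if exact_pattern.search(n)])      # ['Alice', 'alice']
```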
2018/03/16
2,028
4,079
<issue_start>username_0: I want to scrape data from specific rows of [this table](https://www.n2yo.com/passes/?s=39090&a=1). I want the orange/gold rows only. Previously, I used this code provided by SIM to scrape the whole table information and I manipulated it afterwards:

```
from selenium.webdriver import Chrome
from contextlib import closing
from selenium.webdriver.chrome.options import Options
from bs4 import BeautifulSoup

URL = "https://www.n2yo.com/passes/?s=39090&a=1"

chrome_options = Options()
chrome_options.add_argument("--headless")

with closing(Chrome(chrome_options=chrome_options)) as driver:
    driver.get(URL)
    soup = BeautifulSoup(driver.page_source, 'lxml')

    for items in soup.select("#passestable tr"):
        data = [item.text for item in items.select("th,td")]
        print(data)

```

I'm unsure how to alter this code to obtain only the orange/gold rows. I tried searching for the colour code as a tag when parsing but it didn't work. Any and all suggestions are appreciated. Thank you for your time.<issue_comment>username_1: Try replacing this line

```
for items in soup.select("#passestable tr"):

```

with this one

```
for items in soup.select("#passestable tr[bgcolor='#FFCC00'], #passestable tr[bgcolor='#FFFF33']"):

```

to iterate through the `tr` nodes of only the required colors.

Note that this will return all the orange nodes and only then all the gold nodes.

Upvotes: 2 <issue_comment>username_2: You can use regex to match the colors:

```
from selenium import webdriver
from bs4 import BeautifulSoup as soup
import re

d = webdriver.Chrome()
d.get("https://www.n2yo.com/passes/?s=39090&a=1")
s = soup(d.page_source, 'lxml')
data = [i.text for i in s.find_all('tr', {'bgcolor':re.compile('#FFFFFF|#FFFF33|#FFCC00')})]

```

Output:

```
[u'16-Mar 20:34N12\xb020:42W265\xb079\xb020:48SSW199\xb0-Map and details', u'17-Mar 07:51S178\xb007:58W260\xb052\xb008:05NNW341\xb0-Map and details', u'17-Mar 20:00NNE19\xb020:08E102\xb050\xb020:14S180\xb0-Map and details', u'18-Mar 07:17SSE160\xb007:24E83\xb077\xb007:31N349\xb0-Map and details', u'18-Mar 08:58SW217\xb009:04W269\xb013\xb009:09NW323\xb0-Map and details', u'18-Mar 21:06N6\xb021:13WNW295\xb041\xb021:19SW217\xb0-Map and details', u'19-Mar 06:43SE142\xb006:50ENE67\xb038\xb006:57N356\xb0-Map and details', u'19-Mar 08:23SSW196\xb008:30W268\xb027\xb008:36NNW333\xb0-Map and details', u'19-Mar 20:32N12\xb020:39WNW286\xb084\xb020:46SSW198\xb0-Map and details', u'20-Mar 07:48S177\xb007:55WSW254\xb055\xb008:02NNW342\xb0-Map and details', u'20-Mar 19:58NNE20\xb020:05E98\xb047\xb020:12S178\xb0-Map and details', u'21-Mar 07:14SSE159\xb007:22NE58\xb072\xb007:28N349\xb0-Map and details', u'21-Mar 08:55SW216\xb009:01W272\xb014\xb009:07NW325\xb0-Map and details', u'21-Mar 21:03N6\xb021:10WNW288\xb043\xb021:17SW215\xb0-Map and details', u'22-Mar 06:41SE141\xb006:48ENE70\xb036\xb006:54N356\xb0-Map and details', u'22-Mar 08:20S194\xb008:27W265\xb029\xb008:34NNW335\xb0-Map and details', u'22-Mar 20:29N13\xb020:36N348\xb086\xb020:43SSW196\xb0-Map and details', u'23-Mar 07:46S176\xb007:53W265\xb059\xb008:00NNW343\xb0-Map and details', u'23-Mar 19:55NNE20\xb020:02E94\xb045\xb020:09S177\xb0-Map and details', u'24-Mar 07:12SSE157\xb007:19ENE71\xb069\xb007:26N350\xb0-Map and details', u'24-Mar 08:53SW214\xb008:59W270\xb015\xb009:04NW325\xb0-Map and details', u'24-Mar 21:01N7\xb021:08WNW292\xb046\xb021:14SW214\xb0-Map and details', u'25-Mar 06:38SE139\xb006:45ENE65\xb034\xb006:52N357\xb0-Map and details', u'25-Mar 08:18S193\xb008:24W263\xb030\xb008:31NNW335\xb0-Map and details', u'25-Mar 18:49NE39\xb018:54E87\xb010\xb018:59SE134\xb0-Map and details', u'25-Mar 20:27N13\xb020:34SSE161\xb086\xb020:41S195\xb0-Map and details']

```

Upvotes: 2 <issue_comment>username_3: Another approach you can try that does not use `selenium`:

```
from lxml.html import fromstring
import requests

URL = "https://www.n2yo.com/passes/?s=39090&a=1"  # URL from the question

r = requests.get(URL)
html = fromstring(r.content.decode('utf-8'))

# only orange and yellow rows
rows = html.xpath('//tr[@bgcolor="#FFFF33" or @bgcolor="#FFCC00"]')

```

Upvotes: 2
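Building on the answers above, here is an end-to-end sketch that keeps only the orange/gold rows and splits each one into its individual cell strings. The table id, the colour codes and the selenium setup are taken from the question and answers; the rest is my own untested illustration:

```
# Sketch: fetch the page with headless Chrome, keep the orange/gold rows,
# and turn each matching row into a list of its cell strings.
from contextlib import closing

from bs4 import BeautifulSoup
from selenium.webdriver import Chrome
from selenium.webdriver.chrome.options import Options

URL = "https://www.n2yo.com/passes/?s=39090&a=1"

chrome_options = Options()
chrome_options.add_argument("--headless")

with closing(Chrome(chrome_options=chrome_options)) as driver:
    driver.get(URL)
    soup = BeautifulSoup(driver.page_source, 'lxml')

rows = soup.select("#passestable tr[bgcolor='#FFCC00'], #passestable tr[bgcolor='#FFFF33']")
table = [[cell.get_text(strip=True) for cell in row.find_all(['th', 'td'])] for row in rows]
for row in table:
    print(row)
```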